00:00:00.001 Started by upstream project "autotest-per-patch" build number 122926 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.059 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.060 The recommended git tool is: git 00:00:00.060 using credential 00000000-0000-0000-0000-000000000002 00:00:00.061 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.093 Fetching changes from the remote Git repository 00:00:00.095 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.146 Using shallow fetch with depth 1 00:00:00.146 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.146 > git --version # timeout=10 00:00:00.186 > git --version # 'git version 2.39.2' 00:00:00.186 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.187 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.187 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.211 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.223 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.236 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:06.236 > git config core.sparsecheckout # timeout=10 00:00:06.247 > git read-tree -mu HEAD # timeout=10 00:00:06.263 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:06.280 Commit message: "inventory/dev: add missing long names" 00:00:06.280 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:06.373 [Pipeline] Start of Pipeline 00:00:06.386 [Pipeline] library 00:00:06.387 Loading library shm_lib@master 00:00:06.387 Library shm_lib@master is cached. Copying from home. 00:00:06.402 [Pipeline] node 00:00:06.410 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.411 [Pipeline] { 00:00:06.421 [Pipeline] catchError 00:00:06.422 [Pipeline] { 00:00:06.431 [Pipeline] wrap 00:00:06.437 [Pipeline] { 00:00:06.443 [Pipeline] stage 00:00:06.445 [Pipeline] { (Prologue) 00:00:06.602 [Pipeline] sh 00:00:06.890 + logger -p user.info -t JENKINS-CI 00:00:06.908 [Pipeline] echo 00:00:06.910 Node: WFP8 00:00:06.919 [Pipeline] sh 00:00:07.220 [Pipeline] setCustomBuildProperty 00:00:07.232 [Pipeline] echo 00:00:07.233 Cleanup processes 00:00:07.238 [Pipeline] sh 00:00:07.521 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.521 2774978 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.534 [Pipeline] sh 00:00:07.816 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.816 ++ grep -v 'sudo pgrep' 00:00:07.816 ++ awk '{print $1}' 00:00:07.816 + sudo kill -9 00:00:07.816 + true 00:00:07.831 [Pipeline] cleanWs 00:00:07.842 [WS-CLEANUP] Deleting project workspace... 00:00:07.842 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.847 [WS-CLEANUP] done 00:00:07.854 [Pipeline] setCustomBuildProperty 00:00:07.868 [Pipeline] sh 00:00:08.150 + sudo git config --global --replace-all safe.directory '*' 00:00:08.216 [Pipeline] nodesByLabel 00:00:08.218 Found a total of 1 nodes with the 'sorcerer' label 00:00:08.226 [Pipeline] httpRequest 00:00:08.230 HttpMethod: GET 00:00:08.231 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:08.234 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:08.249 Response Code: HTTP/1.1 200 OK 00:00:08.249 Success: Status code 200 is in the accepted range: 200,404 00:00:08.250 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:23.052 [Pipeline] sh 00:00:23.336 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:00:23.356 [Pipeline] httpRequest 00:00:23.360 HttpMethod: GET 00:00:23.361 URL: http://10.211.164.101/packages/spdk_0ba8ca574de797824f3a3628371f3fdb7485939b.tar.gz 00:00:23.362 Sending request to url: http://10.211.164.101/packages/spdk_0ba8ca574de797824f3a3628371f3fdb7485939b.tar.gz 00:00:23.377 Response Code: HTTP/1.1 200 OK 00:00:23.378 Success: Status code 200 is in the accepted range: 200,404 00:00:23.378 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_0ba8ca574de797824f3a3628371f3fdb7485939b.tar.gz 00:00:41.008 [Pipeline] sh 00:00:41.289 + tar --no-same-owner -xf spdk_0ba8ca574de797824f3a3628371f3fdb7485939b.tar.gz 00:00:43.833 [Pipeline] sh 00:00:44.110 + git -C spdk log --oneline -n5 00:00:44.110 0ba8ca574 vbdev_lvol: add lvol set external parent 00:00:44.110 9847cfd5e lvol: add lvol set external parent 00:00:44.110 b9051c4b6 lvol: add lvol set parent 00:00:44.110 6f5af7c8f blob: add blob set external parent 00:00:44.110 f9277e176 blob: add blob set parent 00:00:44.120 [Pipeline] } 00:00:44.134 [Pipeline] // stage 00:00:44.140 [Pipeline] stage 00:00:44.142 [Pipeline] { (Prepare) 00:00:44.155 [Pipeline] writeFile 00:00:44.166 [Pipeline] sh 00:00:44.449 + logger -p user.info -t JENKINS-CI 00:00:44.461 [Pipeline] sh 00:00:44.818 + logger -p user.info -t JENKINS-CI 00:00:44.829 [Pipeline] sh 00:00:45.100 + cat autorun-spdk.conf 00:00:45.100 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.100 SPDK_TEST_NVMF=1 00:00:45.100 SPDK_TEST_NVME_CLI=1 00:00:45.100 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.100 SPDK_TEST_NVMF_NICS=e810 00:00:45.100 SPDK_TEST_VFIOUSER=1 00:00:45.100 SPDK_RUN_UBSAN=1 00:00:45.100 NET_TYPE=phy 00:00:45.107 RUN_NIGHTLY=0 00:00:45.114 [Pipeline] readFile 00:00:45.134 [Pipeline] withEnv 00:00:45.136 [Pipeline] { 00:00:45.146 [Pipeline] sh 00:00:45.424 + set -ex 00:00:45.424 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:45.424 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:45.424 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.424 ++ SPDK_TEST_NVMF=1 00:00:45.424 ++ SPDK_TEST_NVME_CLI=1 00:00:45.424 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.424 ++ SPDK_TEST_NVMF_NICS=e810 00:00:45.424 ++ SPDK_TEST_VFIOUSER=1 00:00:45.424 ++ SPDK_RUN_UBSAN=1 00:00:45.424 ++ NET_TYPE=phy 00:00:45.424 ++ RUN_NIGHTLY=0 00:00:45.424 + case $SPDK_TEST_NVMF_NICS in 00:00:45.424 + DRIVERS=ice 00:00:45.424 + [[ tcp == \r\d\m\a ]] 00:00:45.424 + [[ -n ice ]] 00:00:45.424 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:45.424 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:45.424 rmmod: ERROR: 
Module mlx5_ib is not currently loaded 00:00:45.424 rmmod: ERROR: Module irdma is not currently loaded 00:00:45.424 rmmod: ERROR: Module i40iw is not currently loaded 00:00:45.424 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:45.424 + true 00:00:45.424 + for D in $DRIVERS 00:00:45.424 + sudo modprobe ice 00:00:45.424 + exit 0 00:00:45.433 [Pipeline] } 00:00:45.450 [Pipeline] // withEnv 00:00:45.455 [Pipeline] } 00:00:45.470 [Pipeline] // stage 00:00:45.479 [Pipeline] catchError 00:00:45.480 [Pipeline] { 00:00:45.494 [Pipeline] timeout 00:00:45.494 Timeout set to expire in 40 min 00:00:45.495 [Pipeline] { 00:00:45.510 [Pipeline] stage 00:00:45.512 [Pipeline] { (Tests) 00:00:45.528 [Pipeline] sh 00:00:45.809 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.809 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.809 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.809 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:45.809 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:45.810 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:45.810 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:45.810 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:45.810 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:45.810 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:45.810 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:45.810 + source /etc/os-release 00:00:45.810 ++ NAME='Fedora Linux' 00:00:45.810 ++ VERSION='38 (Cloud Edition)' 00:00:45.810 ++ ID=fedora 00:00:45.810 ++ VERSION_ID=38 00:00:45.810 ++ VERSION_CODENAME= 00:00:45.810 ++ PLATFORM_ID=platform:f38 00:00:45.810 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:45.810 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:45.810 ++ LOGO=fedora-logo-icon 00:00:45.810 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:45.810 ++ HOME_URL=https://fedoraproject.org/ 00:00:45.810 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:45.810 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:45.810 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:45.810 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:45.810 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:45.810 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:45.810 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:45.810 ++ SUPPORT_END=2024-05-14 00:00:45.810 ++ VARIANT='Cloud Edition' 00:00:45.810 ++ VARIANT_ID=cloud 00:00:45.810 + uname -a 00:00:45.810 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:45.810 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:48.342 Hugepages 00:00:48.342 node hugesize free / total 00:00:48.342 node0 1048576kB 0 / 0 00:00:48.342 node0 2048kB 0 / 0 00:00:48.342 node1 1048576kB 0 / 0 00:00:48.342 node1 2048kB 0 / 0 00:00:48.342 00:00:48.342 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:48.342 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:48.342 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:48.342 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:48.342 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:48.342 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:48.342 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:48.342 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:48.342 I/OAT 0000:00:04.7 8086 2021 0 
ioatdma - - 00:00:48.342 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:48.342 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:48.342 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:48.342 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:48.342 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:48.342 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:48.342 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:48.342 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:48.342 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:48.342 + rm -f /tmp/spdk-ld-path 00:00:48.342 + source autorun-spdk.conf 00:00:48.342 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.342 ++ SPDK_TEST_NVMF=1 00:00:48.342 ++ SPDK_TEST_NVME_CLI=1 00:00:48.342 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.342 ++ SPDK_TEST_NVMF_NICS=e810 00:00:48.342 ++ SPDK_TEST_VFIOUSER=1 00:00:48.342 ++ SPDK_RUN_UBSAN=1 00:00:48.342 ++ NET_TYPE=phy 00:00:48.342 ++ RUN_NIGHTLY=0 00:00:48.342 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:48.342 + [[ -n '' ]] 00:00:48.342 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:48.342 + for M in /var/spdk/build-*-manifest.txt 00:00:48.342 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:48.342 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.342 + for M in /var/spdk/build-*-manifest.txt 00:00:48.342 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:48.342 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:48.342 ++ uname 00:00:48.342 + [[ Linux == \L\i\n\u\x ]] 00:00:48.342 + sudo dmesg -T 00:00:48.342 + sudo dmesg --clear 00:00:48.342 + dmesg_pid=2776406 00:00:48.342 + [[ Fedora Linux == FreeBSD ]] 00:00:48.342 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.342 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:48.342 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:48.342 + [[ -x /usr/src/fio-static/fio ]] 00:00:48.342 + export FIO_BIN=/usr/src/fio-static/fio 00:00:48.342 + FIO_BIN=/usr/src/fio-static/fio 00:00:48.342 + sudo dmesg -Tw 00:00:48.342 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:48.342 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:48.342 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:48.342 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.342 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:48.342 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:48.342 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.342 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:48.342 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:48.342 Test configuration: 00:00:48.342 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:48.342 SPDK_TEST_NVMF=1 00:00:48.342 SPDK_TEST_NVME_CLI=1 00:00:48.342 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:48.342 SPDK_TEST_NVMF_NICS=e810 00:00:48.342 SPDK_TEST_VFIOUSER=1 00:00:48.342 SPDK_RUN_UBSAN=1 00:00:48.342 NET_TYPE=phy 00:00:48.342 RUN_NIGHTLY=0 16:51:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:48.600 16:51:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:48.600 16:51:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:48.600 16:51:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:48.601 16:51:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.601 16:51:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.601 16:51:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.601 16:51:36 -- paths/export.sh@5 -- $ export PATH 00:00:48.601 16:51:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:48.601 16:51:36 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:48.601 16:51:36 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:48.601 16:51:36 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715784696.XXXXXX 00:00:48.601 16:51:36 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715784696.s7KaHb 00:00:48.601 16:51:36 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:48.601 16:51:36 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:48.601 16:51:36 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:48.601 16:51:36 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:48.601 16:51:36 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:48.601 16:51:36 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:48.601 16:51:36 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:48.601 16:51:36 -- common/autotest_common.sh@10 -- $ set +x 00:00:48.601 16:51:36 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:48.601 16:51:36 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:48.601 16:51:36 -- pm/common@17 -- $ local monitor 00:00:48.601 16:51:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.601 16:51:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.601 16:51:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.601 16:51:36 -- pm/common@21 -- $ date +%s 00:00:48.601 16:51:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:48.601 16:51:36 -- pm/common@21 -- $ date +%s 00:00:48.601 16:51:36 -- pm/common@25 -- $ sleep 1 00:00:48.601 16:51:36 -- pm/common@21 -- $ date +%s 00:00:48.601 16:51:36 -- pm/common@21 -- $ date +%s 00:00:48.601 16:51:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715784696 00:00:48.601 16:51:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715784696 00:00:48.601 16:51:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715784696 00:00:48.601 16:51:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715784696 00:00:48.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715784696_collect-vmstat.pm.log 00:00:48.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715784696_collect-cpu-load.pm.log 00:00:48.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715784696_collect-cpu-temp.pm.log 00:00:48.601 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715784696_collect-bmc-pm.bmc.pm.log 00:00:49.537 16:51:37 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:49.537 16:51:37 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:49.537 16:51:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:49.537 16:51:37 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:49.537 16:51:37 -- spdk/autobuild.sh@16 -- $ date -u 00:00:49.537 Wed May 15 02:51:37 PM UTC 2024 00:00:49.537 16:51:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:49.537 v24.05-pre-664-g0ba8ca574 00:00:49.537 16:51:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:49.537 16:51:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:49.537 16:51:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:49.537 16:51:37 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:49.537 16:51:37 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:49.537 16:51:37 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.537 ************************************ 00:00:49.537 START TEST ubsan 00:00:49.537 ************************************ 00:00:49.537 16:51:37 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:49.537 using ubsan 00:00:49.537 00:00:49.537 real 0m0.000s 00:00:49.537 user 0m0.000s 00:00:49.537 sys 0m0.000s 00:00:49.537 16:51:37 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:49.537 16:51:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:49.537 ************************************ 00:00:49.537 END TEST ubsan 00:00:49.537 ************************************ 00:00:49.537 16:51:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:49.537 16:51:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:49.537 16:51:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:49.537 16:51:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:49.537 16:51:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:49.537 16:51:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:49.537 16:51:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:49.537 16:51:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:49.537 16:51:37 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:49.796 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:49.796 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:50.054 Using 'verbs' RDMA provider 00:01:03.184 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:13.149 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:13.149 Creating mk/config.mk...done. 00:01:13.149 Creating mk/cc.flags.mk...done. 00:01:13.149 Type 'make' to build. 00:01:13.149 16:52:00 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:13.149 16:52:00 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:13.149 16:52:00 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:13.149 16:52:00 -- common/autotest_common.sh@10 -- $ set +x 00:01:13.149 ************************************ 00:01:13.149 START TEST make 00:01:13.149 ************************************ 00:01:13.149 16:52:00 make -- common/autotest_common.sh@1121 -- $ make -j96 00:01:13.406 make[1]: Nothing to be done for 'all'. 
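For reference, the configure and make steps captured in the trace above correspond roughly to the following standalone sequence. This is only a sketch: the flags and the -j96 job count are copied from this run's xtrace output, and the workspace path is specific to this CI node.

# Sketch: rebuild SPDK with the same options this job used (paths/flags taken from this run).
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
make -j96    # -j96 matches this runner; adjust to the local CPU count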
00:01:14.788 The Meson build system 00:01:14.788 Version: 1.3.1 00:01:14.788 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:14.788 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:14.788 Build type: native build 00:01:14.788 Project name: libvfio-user 00:01:14.788 Project version: 0.0.1 00:01:14.788 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:14.788 C linker for the host machine: cc ld.bfd 2.39-16 00:01:14.788 Host machine cpu family: x86_64 00:01:14.788 Host machine cpu: x86_64 00:01:14.788 Run-time dependency threads found: YES 00:01:14.788 Library dl found: YES 00:01:14.788 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:14.788 Run-time dependency json-c found: YES 0.17 00:01:14.788 Run-time dependency cmocka found: YES 1.1.7 00:01:14.788 Program pytest-3 found: NO 00:01:14.788 Program flake8 found: NO 00:01:14.788 Program misspell-fixer found: NO 00:01:14.788 Program restructuredtext-lint found: NO 00:01:14.788 Program valgrind found: YES (/usr/bin/valgrind) 00:01:14.788 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:14.788 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:14.788 Compiler for C supports arguments -Wwrite-strings: YES 00:01:14.788 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:14.788 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:14.788 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:14.788 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:14.788 Build targets in project: 8 00:01:14.788 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:14.788 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:14.788 00:01:14.788 libvfio-user 0.0.1 00:01:14.788 00:01:14.788 User defined options 00:01:14.788 buildtype : debug 00:01:14.788 default_library: shared 00:01:14.788 libdir : /usr/local/lib 00:01:14.788 00:01:14.788 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:15.046 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:15.303 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:15.303 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:15.303 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:15.303 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:15.303 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:15.303 [6/37] Compiling C object samples/null.p/null.c.o 00:01:15.303 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:15.303 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:15.303 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:15.303 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:15.303 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:15.303 [12/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:15.303 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:15.303 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:15.303 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:15.303 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:15.303 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:15.303 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:15.303 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:15.303 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:15.303 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:15.303 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:15.303 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:15.303 [24/37] Compiling C object samples/server.p/server.c.o 00:01:15.303 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:15.303 [26/37] Compiling C object samples/client.p/client.c.o 00:01:15.560 [27/37] Linking target samples/client 00:01:15.560 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:15.560 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:15.560 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:15.560 [31/37] Linking target test/unit_tests 00:01:15.560 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:15.560 [33/37] Linking target samples/null 00:01:15.560 [34/37] Linking target samples/lspci 00:01:15.560 [35/37] Linking target samples/server 00:01:15.560 [36/37] Linking target samples/gpio-pci-idio-16 00:01:15.560 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:15.560 INFO: autodetecting backend as ninja 00:01:15.560 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
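The libvfio-user submodule is built out of tree with Meson and Ninja and then staged with a DESTDIR install, as the surrounding output shows. A minimal equivalent sequence looks roughly like the sketch below; the -D option spellings are an assumption inferred from the "User defined options" summary above, and the directories are this run's workspace paths.

# Sketch: out-of-tree Meson build and staged install of libvfio-user (assumed option spellings).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
meson setup "$SPDK/build/libvfio-user/build-debug" "$SPDK/libvfio-user" \
    -Dbuildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
ninja -C "$SPDK/build/libvfio-user/build-debug"
DESTDIR="$SPDK/build/libvfio-user" meson install --quiet -C "$SPDK/build/libvfio-user/build-debug"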
00:01:15.817 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:16.075 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:16.075 ninja: no work to do. 00:01:21.343 The Meson build system 00:01:21.343 Version: 1.3.1 00:01:21.343 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:21.343 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:21.343 Build type: native build 00:01:21.343 Program cat found: YES (/usr/bin/cat) 00:01:21.343 Project name: DPDK 00:01:21.343 Project version: 23.11.0 00:01:21.343 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:21.343 C linker for the host machine: cc ld.bfd 2.39-16 00:01:21.343 Host machine cpu family: x86_64 00:01:21.343 Host machine cpu: x86_64 00:01:21.343 Message: ## Building in Developer Mode ## 00:01:21.343 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:21.343 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:21.343 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:21.343 Program python3 found: YES (/usr/bin/python3) 00:01:21.343 Program cat found: YES (/usr/bin/cat) 00:01:21.343 Compiler for C supports arguments -march=native: YES 00:01:21.343 Checking for size of "void *" : 8 00:01:21.343 Checking for size of "void *" : 8 (cached) 00:01:21.343 Library m found: YES 00:01:21.343 Library numa found: YES 00:01:21.343 Has header "numaif.h" : YES 00:01:21.343 Library fdt found: NO 00:01:21.344 Library execinfo found: NO 00:01:21.344 Has header "execinfo.h" : YES 00:01:21.344 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:21.344 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:21.344 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:21.344 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:21.344 Run-time dependency openssl found: YES 3.0.9 00:01:21.344 Run-time dependency libpcap found: YES 1.10.4 00:01:21.344 Has header "pcap.h" with dependency libpcap: YES 00:01:21.344 Compiler for C supports arguments -Wcast-qual: YES 00:01:21.344 Compiler for C supports arguments -Wdeprecated: YES 00:01:21.344 Compiler for C supports arguments -Wformat: YES 00:01:21.344 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:21.344 Compiler for C supports arguments -Wformat-security: NO 00:01:21.344 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:21.344 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:21.344 Compiler for C supports arguments -Wnested-externs: YES 00:01:21.344 Compiler for C supports arguments -Wold-style-definition: YES 00:01:21.344 Compiler for C supports arguments -Wpointer-arith: YES 00:01:21.344 Compiler for C supports arguments -Wsign-compare: YES 00:01:21.344 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:21.344 Compiler for C supports arguments -Wundef: YES 00:01:21.344 Compiler for C supports arguments -Wwrite-strings: YES 00:01:21.344 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:21.344 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:21.344 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:21.344 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:21.344 Program objdump found: YES (/usr/bin/objdump) 00:01:21.344 Compiler for C supports arguments -mavx512f: YES 00:01:21.344 Checking if "AVX512 checking" compiles: YES 00:01:21.344 Fetching value of define "__SSE4_2__" : 1 00:01:21.344 Fetching value of define "__AES__" : 1 00:01:21.344 Fetching value of define "__AVX__" : 1 00:01:21.344 Fetching value of define "__AVX2__" : 1 00:01:21.344 Fetching value of define "__AVX512BW__" : 1 00:01:21.344 Fetching value of define "__AVX512CD__" : 1 00:01:21.344 Fetching value of define "__AVX512DQ__" : 1 00:01:21.344 Fetching value of define "__AVX512F__" : 1 00:01:21.344 Fetching value of define "__AVX512VL__" : 1 00:01:21.344 Fetching value of define "__PCLMUL__" : 1 00:01:21.344 Fetching value of define "__RDRND__" : 1 00:01:21.344 Fetching value of define "__RDSEED__" : 1 00:01:21.344 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:21.344 Fetching value of define "__znver1__" : (undefined) 00:01:21.344 Fetching value of define "__znver2__" : (undefined) 00:01:21.344 Fetching value of define "__znver3__" : (undefined) 00:01:21.344 Fetching value of define "__znver4__" : (undefined) 00:01:21.344 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:21.344 Message: lib/log: Defining dependency "log" 00:01:21.344 Message: lib/kvargs: Defining dependency "kvargs" 00:01:21.344 Message: lib/telemetry: Defining dependency "telemetry" 00:01:21.344 Checking for function "getentropy" : NO 00:01:21.344 Message: lib/eal: Defining dependency "eal" 00:01:21.344 Message: lib/ring: Defining dependency "ring" 00:01:21.344 Message: lib/rcu: Defining dependency "rcu" 00:01:21.344 Message: lib/mempool: Defining dependency "mempool" 00:01:21.344 Message: lib/mbuf: Defining dependency "mbuf" 00:01:21.344 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:21.344 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:21.344 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:21.344 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:21.344 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:21.344 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:21.344 Compiler for C supports arguments -mpclmul: YES 00:01:21.344 Compiler for C supports arguments -maes: YES 00:01:21.344 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:21.344 Compiler for C supports arguments -mavx512bw: YES 00:01:21.344 Compiler for C supports arguments -mavx512dq: YES 00:01:21.344 Compiler for C supports arguments -mavx512vl: YES 00:01:21.344 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:21.344 Compiler for C supports arguments -mavx2: YES 00:01:21.344 Compiler for C supports arguments -mavx: YES 00:01:21.344 Message: lib/net: Defining dependency "net" 00:01:21.344 Message: lib/meter: Defining dependency "meter" 00:01:21.344 Message: lib/ethdev: Defining dependency "ethdev" 00:01:21.344 Message: lib/pci: Defining dependency "pci" 00:01:21.344 Message: lib/cmdline: Defining dependency "cmdline" 00:01:21.344 Message: lib/hash: Defining dependency "hash" 00:01:21.344 Message: lib/timer: Defining dependency "timer" 00:01:21.344 Message: lib/compressdev: Defining dependency "compressdev" 00:01:21.344 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:21.344 Message: lib/dmadev: Defining dependency "dmadev" 00:01:21.344 Compiler for C supports arguments -Wno-cast-qual: YES 
00:01:21.344 Message: lib/power: Defining dependency "power" 00:01:21.344 Message: lib/reorder: Defining dependency "reorder" 00:01:21.344 Message: lib/security: Defining dependency "security" 00:01:21.344 Has header "linux/userfaultfd.h" : YES 00:01:21.344 Has header "linux/vduse.h" : YES 00:01:21.344 Message: lib/vhost: Defining dependency "vhost" 00:01:21.344 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:21.344 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:21.344 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:21.344 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:21.344 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:21.344 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:21.344 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:21.344 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:21.344 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:21.344 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:21.344 Program doxygen found: YES (/usr/bin/doxygen) 00:01:21.344 Configuring doxy-api-html.conf using configuration 00:01:21.344 Configuring doxy-api-man.conf using configuration 00:01:21.344 Program mandb found: YES (/usr/bin/mandb) 00:01:21.344 Program sphinx-build found: NO 00:01:21.344 Configuring rte_build_config.h using configuration 00:01:21.344 Message: 00:01:21.344 ================= 00:01:21.344 Applications Enabled 00:01:21.344 ================= 00:01:21.344 00:01:21.344 apps: 00:01:21.344 00:01:21.344 00:01:21.344 Message: 00:01:21.344 ================= 00:01:21.344 Libraries Enabled 00:01:21.344 ================= 00:01:21.344 00:01:21.344 libs: 00:01:21.344 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:21.344 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:21.344 cryptodev, dmadev, power, reorder, security, vhost, 00:01:21.344 00:01:21.344 Message: 00:01:21.344 =============== 00:01:21.344 Drivers Enabled 00:01:21.344 =============== 00:01:21.344 00:01:21.344 common: 00:01:21.344 00:01:21.344 bus: 00:01:21.344 pci, vdev, 00:01:21.344 mempool: 00:01:21.344 ring, 00:01:21.344 dma: 00:01:21.344 00:01:21.344 net: 00:01:21.344 00:01:21.344 crypto: 00:01:21.344 00:01:21.344 compress: 00:01:21.344 00:01:21.344 vdpa: 00:01:21.344 00:01:21.344 00:01:21.344 Message: 00:01:21.344 ================= 00:01:21.344 Content Skipped 00:01:21.344 ================= 00:01:21.344 00:01:21.344 apps: 00:01:21.344 dumpcap: explicitly disabled via build config 00:01:21.344 graph: explicitly disabled via build config 00:01:21.344 pdump: explicitly disabled via build config 00:01:21.344 proc-info: explicitly disabled via build config 00:01:21.344 test-acl: explicitly disabled via build config 00:01:21.344 test-bbdev: explicitly disabled via build config 00:01:21.344 test-cmdline: explicitly disabled via build config 00:01:21.344 test-compress-perf: explicitly disabled via build config 00:01:21.344 test-crypto-perf: explicitly disabled via build config 00:01:21.344 test-dma-perf: explicitly disabled via build config 00:01:21.344 test-eventdev: explicitly disabled via build config 00:01:21.344 test-fib: explicitly disabled via build config 00:01:21.344 test-flow-perf: explicitly disabled via build config 00:01:21.344 test-gpudev: explicitly disabled via build config 00:01:21.344 test-mldev: explicitly disabled via build 
config 00:01:21.344 test-pipeline: explicitly disabled via build config 00:01:21.344 test-pmd: explicitly disabled via build config 00:01:21.344 test-regex: explicitly disabled via build config 00:01:21.344 test-sad: explicitly disabled via build config 00:01:21.344 test-security-perf: explicitly disabled via build config 00:01:21.344 00:01:21.344 libs: 00:01:21.344 metrics: explicitly disabled via build config 00:01:21.344 acl: explicitly disabled via build config 00:01:21.344 bbdev: explicitly disabled via build config 00:01:21.344 bitratestats: explicitly disabled via build config 00:01:21.344 bpf: explicitly disabled via build config 00:01:21.344 cfgfile: explicitly disabled via build config 00:01:21.344 distributor: explicitly disabled via build config 00:01:21.344 efd: explicitly disabled via build config 00:01:21.344 eventdev: explicitly disabled via build config 00:01:21.344 dispatcher: explicitly disabled via build config 00:01:21.344 gpudev: explicitly disabled via build config 00:01:21.344 gro: explicitly disabled via build config 00:01:21.344 gso: explicitly disabled via build config 00:01:21.344 ip_frag: explicitly disabled via build config 00:01:21.344 jobstats: explicitly disabled via build config 00:01:21.344 latencystats: explicitly disabled via build config 00:01:21.344 lpm: explicitly disabled via build config 00:01:21.344 member: explicitly disabled via build config 00:01:21.344 pcapng: explicitly disabled via build config 00:01:21.344 rawdev: explicitly disabled via build config 00:01:21.344 regexdev: explicitly disabled via build config 00:01:21.344 mldev: explicitly disabled via build config 00:01:21.344 rib: explicitly disabled via build config 00:01:21.344 sched: explicitly disabled via build config 00:01:21.344 stack: explicitly disabled via build config 00:01:21.345 ipsec: explicitly disabled via build config 00:01:21.345 pdcp: explicitly disabled via build config 00:01:21.345 fib: explicitly disabled via build config 00:01:21.345 port: explicitly disabled via build config 00:01:21.345 pdump: explicitly disabled via build config 00:01:21.345 table: explicitly disabled via build config 00:01:21.345 pipeline: explicitly disabled via build config 00:01:21.345 graph: explicitly disabled via build config 00:01:21.345 node: explicitly disabled via build config 00:01:21.345 00:01:21.345 drivers: 00:01:21.345 common/cpt: not in enabled drivers build config 00:01:21.345 common/dpaax: not in enabled drivers build config 00:01:21.345 common/iavf: not in enabled drivers build config 00:01:21.345 common/idpf: not in enabled drivers build config 00:01:21.345 common/mvep: not in enabled drivers build config 00:01:21.345 common/octeontx: not in enabled drivers build config 00:01:21.345 bus/auxiliary: not in enabled drivers build config 00:01:21.345 bus/cdx: not in enabled drivers build config 00:01:21.345 bus/dpaa: not in enabled drivers build config 00:01:21.345 bus/fslmc: not in enabled drivers build config 00:01:21.345 bus/ifpga: not in enabled drivers build config 00:01:21.345 bus/platform: not in enabled drivers build config 00:01:21.345 bus/vmbus: not in enabled drivers build config 00:01:21.345 common/cnxk: not in enabled drivers build config 00:01:21.345 common/mlx5: not in enabled drivers build config 00:01:21.345 common/nfp: not in enabled drivers build config 00:01:21.345 common/qat: not in enabled drivers build config 00:01:21.345 common/sfc_efx: not in enabled drivers build config 00:01:21.345 mempool/bucket: not in enabled drivers build config 00:01:21.345 
mempool/cnxk: not in enabled drivers build config 00:01:21.345 mempool/dpaa: not in enabled drivers build config 00:01:21.345 mempool/dpaa2: not in enabled drivers build config 00:01:21.345 mempool/octeontx: not in enabled drivers build config 00:01:21.345 mempool/stack: not in enabled drivers build config 00:01:21.345 dma/cnxk: not in enabled drivers build config 00:01:21.345 dma/dpaa: not in enabled drivers build config 00:01:21.345 dma/dpaa2: not in enabled drivers build config 00:01:21.345 dma/hisilicon: not in enabled drivers build config 00:01:21.345 dma/idxd: not in enabled drivers build config 00:01:21.345 dma/ioat: not in enabled drivers build config 00:01:21.345 dma/skeleton: not in enabled drivers build config 00:01:21.345 net/af_packet: not in enabled drivers build config 00:01:21.345 net/af_xdp: not in enabled drivers build config 00:01:21.345 net/ark: not in enabled drivers build config 00:01:21.345 net/atlantic: not in enabled drivers build config 00:01:21.345 net/avp: not in enabled drivers build config 00:01:21.345 net/axgbe: not in enabled drivers build config 00:01:21.345 net/bnx2x: not in enabled drivers build config 00:01:21.345 net/bnxt: not in enabled drivers build config 00:01:21.345 net/bonding: not in enabled drivers build config 00:01:21.345 net/cnxk: not in enabled drivers build config 00:01:21.345 net/cpfl: not in enabled drivers build config 00:01:21.345 net/cxgbe: not in enabled drivers build config 00:01:21.345 net/dpaa: not in enabled drivers build config 00:01:21.345 net/dpaa2: not in enabled drivers build config 00:01:21.345 net/e1000: not in enabled drivers build config 00:01:21.345 net/ena: not in enabled drivers build config 00:01:21.345 net/enetc: not in enabled drivers build config 00:01:21.345 net/enetfec: not in enabled drivers build config 00:01:21.345 net/enic: not in enabled drivers build config 00:01:21.345 net/failsafe: not in enabled drivers build config 00:01:21.345 net/fm10k: not in enabled drivers build config 00:01:21.345 net/gve: not in enabled drivers build config 00:01:21.345 net/hinic: not in enabled drivers build config 00:01:21.345 net/hns3: not in enabled drivers build config 00:01:21.345 net/i40e: not in enabled drivers build config 00:01:21.345 net/iavf: not in enabled drivers build config 00:01:21.345 net/ice: not in enabled drivers build config 00:01:21.345 net/idpf: not in enabled drivers build config 00:01:21.345 net/igc: not in enabled drivers build config 00:01:21.345 net/ionic: not in enabled drivers build config 00:01:21.345 net/ipn3ke: not in enabled drivers build config 00:01:21.345 net/ixgbe: not in enabled drivers build config 00:01:21.345 net/mana: not in enabled drivers build config 00:01:21.345 net/memif: not in enabled drivers build config 00:01:21.345 net/mlx4: not in enabled drivers build config 00:01:21.345 net/mlx5: not in enabled drivers build config 00:01:21.345 net/mvneta: not in enabled drivers build config 00:01:21.345 net/mvpp2: not in enabled drivers build config 00:01:21.345 net/netvsc: not in enabled drivers build config 00:01:21.345 net/nfb: not in enabled drivers build config 00:01:21.345 net/nfp: not in enabled drivers build config 00:01:21.345 net/ngbe: not in enabled drivers build config 00:01:21.345 net/null: not in enabled drivers build config 00:01:21.345 net/octeontx: not in enabled drivers build config 00:01:21.345 net/octeon_ep: not in enabled drivers build config 00:01:21.345 net/pcap: not in enabled drivers build config 00:01:21.345 net/pfe: not in enabled drivers build config 
00:01:21.345 net/qede: not in enabled drivers build config 00:01:21.345 net/ring: not in enabled drivers build config 00:01:21.345 net/sfc: not in enabled drivers build config 00:01:21.345 net/softnic: not in enabled drivers build config 00:01:21.345 net/tap: not in enabled drivers build config 00:01:21.345 net/thunderx: not in enabled drivers build config 00:01:21.345 net/txgbe: not in enabled drivers build config 00:01:21.345 net/vdev_netvsc: not in enabled drivers build config 00:01:21.345 net/vhost: not in enabled drivers build config 00:01:21.345 net/virtio: not in enabled drivers build config 00:01:21.345 net/vmxnet3: not in enabled drivers build config 00:01:21.345 raw/*: missing internal dependency, "rawdev" 00:01:21.345 crypto/armv8: not in enabled drivers build config 00:01:21.345 crypto/bcmfs: not in enabled drivers build config 00:01:21.345 crypto/caam_jr: not in enabled drivers build config 00:01:21.345 crypto/ccp: not in enabled drivers build config 00:01:21.345 crypto/cnxk: not in enabled drivers build config 00:01:21.345 crypto/dpaa_sec: not in enabled drivers build config 00:01:21.345 crypto/dpaa2_sec: not in enabled drivers build config 00:01:21.345 crypto/ipsec_mb: not in enabled drivers build config 00:01:21.345 crypto/mlx5: not in enabled drivers build config 00:01:21.345 crypto/mvsam: not in enabled drivers build config 00:01:21.345 crypto/nitrox: not in enabled drivers build config 00:01:21.345 crypto/null: not in enabled drivers build config 00:01:21.345 crypto/octeontx: not in enabled drivers build config 00:01:21.345 crypto/openssl: not in enabled drivers build config 00:01:21.345 crypto/scheduler: not in enabled drivers build config 00:01:21.345 crypto/uadk: not in enabled drivers build config 00:01:21.345 crypto/virtio: not in enabled drivers build config 00:01:21.345 compress/isal: not in enabled drivers build config 00:01:21.345 compress/mlx5: not in enabled drivers build config 00:01:21.345 compress/octeontx: not in enabled drivers build config 00:01:21.345 compress/zlib: not in enabled drivers build config 00:01:21.345 regex/*: missing internal dependency, "regexdev" 00:01:21.345 ml/*: missing internal dependency, "mldev" 00:01:21.345 vdpa/ifc: not in enabled drivers build config 00:01:21.345 vdpa/mlx5: not in enabled drivers build config 00:01:21.345 vdpa/nfp: not in enabled drivers build config 00:01:21.345 vdpa/sfc: not in enabled drivers build config 00:01:21.345 event/*: missing internal dependency, "eventdev" 00:01:21.345 baseband/*: missing internal dependency, "bbdev" 00:01:21.345 gpu/*: missing internal dependency, "gpudev" 00:01:21.345 00:01:21.345 00:01:21.345 Build targets in project: 85 00:01:21.345 00:01:21.345 DPDK 23.11.0 00:01:21.345 00:01:21.345 User defined options 00:01:21.345 buildtype : debug 00:01:21.345 default_library : shared 00:01:21.345 libdir : lib 00:01:21.345 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:21.345 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:21.345 c_link_args : 00:01:21.345 cpu_instruction_set: native 00:01:21.345 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:21.345 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:21.345 enable_docs : false 00:01:21.345 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:21.345 enable_kmods : false 00:01:21.345 tests : false 00:01:21.345 00:01:21.345 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:21.635 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:21.635 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:21.635 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:21.635 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:21.635 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:21.635 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:21.635 [6/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:21.635 [7/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:21.635 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:21.635 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:21.635 [10/265] Linking static target lib/librte_kvargs.a 00:01:21.635 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:21.635 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:21.635 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:21.635 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:21.635 [15/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:21.635 [16/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:21.917 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:21.917 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:21.917 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:21.917 [20/265] Linking static target lib/librte_log.a 00:01:21.917 [21/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:21.917 [22/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:21.917 [23/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:21.917 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:21.917 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:21.917 [26/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:21.917 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:21.917 [28/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:21.917 [29/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:21.917 [30/265] Linking static target lib/librte_pci.a 00:01:21.917 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:21.917 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:21.917 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:21.917 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:21.917 [35/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:21.917 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:21.917 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:22.183 [38/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:22.183 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:22.183 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:22.183 [41/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:22.183 [42/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:22.183 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:22.183 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:22.183 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:22.183 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:22.183 [47/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:22.183 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:22.183 [49/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:22.183 [50/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.183 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:22.183 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:22.183 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:22.183 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:22.183 [55/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:22.183 [56/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:22.183 [57/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:22.183 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:22.184 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:22.184 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:22.184 [61/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:22.184 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:22.184 [63/265] Linking static target lib/librte_ring.a 00:01:22.184 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:22.184 [65/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:22.184 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:22.184 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:22.184 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:22.184 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:22.184 [70/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:22.184 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:22.184 [72/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:22.184 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:22.184 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:22.184 [75/265] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:22.184 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:22.184 [77/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:22.184 [78/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:22.184 [79/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:22.184 [80/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:22.184 [81/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:22.184 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:22.184 [83/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:22.184 [84/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:22.184 [85/265] Linking static target lib/librte_meter.a 00:01:22.184 [86/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:22.184 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:22.184 [88/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:22.184 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:22.184 [90/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:22.184 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:22.184 [92/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:22.184 [93/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:22.184 [94/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:22.184 [95/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.184 [96/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:22.184 [97/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:22.184 [98/265] Linking static target lib/librte_telemetry.a 00:01:22.184 [99/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:22.441 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:22.441 [101/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:22.441 [102/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:22.441 [103/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:22.441 [104/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:22.441 [105/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:22.441 [106/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:22.441 [107/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:22.442 [108/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:22.442 [109/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:22.442 [110/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:22.442 [111/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:22.442 [112/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:22.442 [113/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:22.442 [114/265] Linking static target lib/librte_cmdline.a 00:01:22.442 [115/265] Linking static target lib/librte_net.a 00:01:22.442 [116/265] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:22.442 [117/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:22.442 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:22.442 [119/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:22.442 [120/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:22.442 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:22.442 [122/265] Linking static target lib/librte_mempool.a 00:01:22.442 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:22.442 [124/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:22.442 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:22.442 [126/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:22.442 [127/265] Linking static target lib/librte_eal.a 00:01:22.442 [128/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:22.442 [129/265] Linking static target lib/librte_rcu.a 00:01:22.442 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:22.442 [131/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:22.442 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:22.442 [133/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.442 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:22.442 [135/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:22.442 [136/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:22.442 [137/265] Linking static target lib/librte_timer.a 00:01:22.442 [138/265] Linking target lib/librte_log.so.24.0 00:01:22.442 [139/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:22.442 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:22.442 [141/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:22.442 [142/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.442 [143/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:22.442 [144/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:22.442 [145/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.442 [146/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:22.442 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:22.701 [148/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:22.702 [149/265] Linking static target lib/librte_mbuf.a 00:01:22.702 [150/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:22.702 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:22.702 [152/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:22.702 [153/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.702 [154/265] Linking static target lib/librte_compressdev.a 00:01:22.702 [155/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:22.702 [156/265] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:22.702 [157/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:22.702 [158/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:22.702 [159/265] Linking static target lib/librte_dmadev.a 00:01:22.702 [160/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:22.702 [161/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:22.702 [162/265] Linking target lib/librte_kvargs.so.24.0 00:01:22.702 [163/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:22.702 [164/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:22.702 [165/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:22.702 [166/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:22.702 [167/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:22.702 [168/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:22.702 [169/265] Linking static target lib/librte_hash.a 00:01:22.702 [170/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:22.702 [171/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:22.702 [172/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:22.702 [173/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:22.702 [174/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.702 [175/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:22.702 [176/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:22.702 [177/265] Linking static target lib/librte_security.a 00:01:22.702 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:22.702 [179/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.702 [180/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:22.702 [181/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:22.702 [182/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:22.702 [183/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:22.702 [184/265] Linking target lib/librte_telemetry.so.24.0 00:01:22.702 [185/265] Linking static target lib/librte_reorder.a 00:01:22.702 [186/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:22.702 [187/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:22.702 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:22.702 [189/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:22.702 [190/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:22.702 [191/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:22.961 [192/265] Linking static target lib/librte_power.a 00:01:22.961 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:22.961 [194/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:22.961 [195/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:22.961 [196/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:22.961 [197/265] Linking static target drivers/librte_bus_vdev.a 00:01:22.961 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:22.961 [199/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:22.961 [200/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:22.961 [201/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:22.961 [202/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:22.961 [203/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:22.961 [204/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:22.961 [205/265] Linking static target drivers/librte_bus_pci.a 00:01:22.961 [206/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:22.961 [207/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:22.961 [208/265] Linking static target drivers/librte_mempool_ring.a 00:01:23.220 [209/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:23.220 [210/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.220 [211/265] Linking static target lib/librte_cryptodev.a 00:01:23.220 [212/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.220 [213/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.220 [214/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.220 [215/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.220 [216/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.220 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.478 [218/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:23.478 [219/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.478 [220/265] Linking static target lib/librte_ethdev.a 00:01:23.478 [221/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.478 [222/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:23.737 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.737 [224/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.673 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:24.673 [226/265] Linking static target lib/librte_vhost.a 00:01:24.931 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.305 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.488 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.865 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.123 [231/265] Linking target lib/librte_eal.so.24.0 00:01:32.123 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:32.123 [233/265] 
Linking target drivers/librte_bus_vdev.so.24.0 00:01:32.123 [234/265] Linking target lib/librte_ring.so.24.0 00:01:32.123 [235/265] Linking target lib/librte_meter.so.24.0 00:01:32.123 [236/265] Linking target lib/librte_pci.so.24.0 00:01:32.123 [237/265] Linking target lib/librte_timer.so.24.0 00:01:32.123 [238/265] Linking target lib/librte_dmadev.so.24.0 00:01:32.381 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:32.381 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:32.381 [241/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:32.381 [242/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:32.381 [243/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:32.381 [244/265] Linking target lib/librte_rcu.so.24.0 00:01:32.381 [245/265] Linking target lib/librte_mempool.so.24.0 00:01:32.381 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:32.381 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:32.381 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:32.639 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:32.639 [250/265] Linking target lib/librte_mbuf.so.24.0 00:01:32.639 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:32.639 [252/265] Linking target lib/librte_reorder.so.24.0 00:01:32.639 [253/265] Linking target lib/librte_compressdev.so.24.0 00:01:32.639 [254/265] Linking target lib/librte_net.so.24.0 00:01:32.639 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:01:32.896 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:32.896 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:32.896 [258/265] Linking target lib/librte_security.so.24.0 00:01:32.896 [259/265] Linking target lib/librte_cmdline.so.24.0 00:01:32.896 [260/265] Linking target lib/librte_hash.so.24.0 00:01:32.896 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:33.155 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:33.155 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:33.155 [264/265] Linking target lib/librte_power.so.24.0 00:01:33.155 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:33.155 INFO: autodetecting backend as ninja 00:01:33.155 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:34.091 CC lib/ut/ut.o 00:01:34.091 CC lib/log/log.o 00:01:34.091 CC lib/ut_mock/mock.o 00:01:34.091 CC lib/log/log_flags.o 00:01:34.091 CC lib/log/log_deprecated.o 00:01:34.091 LIB libspdk_ut.a 00:01:34.349 LIB libspdk_ut_mock.a 00:01:34.349 SO libspdk_ut.so.2.0 00:01:34.349 LIB libspdk_log.a 00:01:34.349 SO libspdk_ut_mock.so.6.0 00:01:34.349 SO libspdk_log.so.7.0 00:01:34.349 SYMLINK libspdk_ut.so 00:01:34.349 SYMLINK libspdk_ut_mock.so 00:01:34.349 SYMLINK libspdk_log.so 00:01:34.607 CXX lib/trace_parser/trace.o 00:01:34.607 CC lib/dma/dma.o 00:01:34.607 CC lib/ioat/ioat.o 00:01:34.607 CC lib/util/base64.o 00:01:34.607 CC lib/util/bit_array.o 00:01:34.607 CC lib/util/cpuset.o 00:01:34.607 CC lib/util/crc16.o 00:01:34.607 CC lib/util/crc32.o 00:01:34.607 CC lib/util/crc32c.o 
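The DPDK submodule build above is driven by Meson and Ninja: the option summary printed before the compile ( enable_drivers : bus,bus/pci,bus/vdev,mempool/ring, with enable_docs, enable_kmods and tests all false ) is what Meson was configured with, and Ninja then builds the numbered [n/265] targets inside spdk/dpdk/build-tmp. A minimal stand-alone sketch of an equivalent manual invocation follows; it assumes a DPDK source checkout as the current directory, and the configure wrapper SPDK actually uses is not shown in this log.

    # Sketch only: configure a trimmed-down DPDK build with the same Meson
    # options that appear in the summary above, then build it with Ninja.
    meson setup build-tmp \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false
    ninja -C build-tmp -j 96    # matches the backend command the log reports
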
00:01:34.607 CC lib/util/crc32_ieee.o 00:01:34.607 CC lib/util/crc64.o 00:01:34.607 CC lib/util/dif.o 00:01:34.607 CC lib/util/fd.o 00:01:34.607 CC lib/util/file.o 00:01:34.607 CC lib/util/hexlify.o 00:01:34.607 CC lib/util/iov.o 00:01:34.607 CC lib/util/math.o 00:01:34.607 CC lib/util/pipe.o 00:01:34.607 CC lib/util/strerror_tls.o 00:01:34.607 CC lib/util/string.o 00:01:34.607 CC lib/util/uuid.o 00:01:34.607 CC lib/util/xor.o 00:01:34.607 CC lib/util/fd_group.o 00:01:34.607 CC lib/util/zipf.o 00:01:34.866 LIB libspdk_dma.a 00:01:34.866 CC lib/vfio_user/host/vfio_user.o 00:01:34.866 CC lib/vfio_user/host/vfio_user_pci.o 00:01:34.866 SO libspdk_dma.so.4.0 00:01:34.866 LIB libspdk_ioat.a 00:01:34.866 SYMLINK libspdk_dma.so 00:01:34.866 SO libspdk_ioat.so.7.0 00:01:34.866 SYMLINK libspdk_ioat.so 00:01:35.125 LIB libspdk_vfio_user.a 00:01:35.125 SO libspdk_vfio_user.so.5.0 00:01:35.125 LIB libspdk_util.a 00:01:35.125 SYMLINK libspdk_vfio_user.so 00:01:35.125 SO libspdk_util.so.9.0 00:01:35.125 SYMLINK libspdk_util.so 00:01:35.384 LIB libspdk_trace_parser.a 00:01:35.384 SO libspdk_trace_parser.so.5.0 00:01:35.384 SYMLINK libspdk_trace_parser.so 00:01:35.641 CC lib/vmd/vmd.o 00:01:35.641 CC lib/vmd/led.o 00:01:35.641 CC lib/rdma/common.o 00:01:35.641 CC lib/rdma/rdma_verbs.o 00:01:35.641 CC lib/conf/conf.o 00:01:35.641 CC lib/env_dpdk/env.o 00:01:35.641 CC lib/env_dpdk/memory.o 00:01:35.641 CC lib/env_dpdk/init.o 00:01:35.641 CC lib/env_dpdk/pci.o 00:01:35.641 CC lib/json/json_parse.o 00:01:35.641 CC lib/env_dpdk/threads.o 00:01:35.641 CC lib/json/json_util.o 00:01:35.641 CC lib/env_dpdk/pci_ioat.o 00:01:35.641 CC lib/idxd/idxd_user.o 00:01:35.641 CC lib/env_dpdk/pci_virtio.o 00:01:35.641 CC lib/json/json_write.o 00:01:35.641 CC lib/idxd/idxd.o 00:01:35.641 CC lib/env_dpdk/pci_vmd.o 00:01:35.641 CC lib/env_dpdk/sigbus_handler.o 00:01:35.641 CC lib/env_dpdk/pci_idxd.o 00:01:35.641 CC lib/env_dpdk/pci_event.o 00:01:35.641 CC lib/env_dpdk/pci_dpdk.o 00:01:35.641 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:35.641 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:35.641 LIB libspdk_conf.a 00:01:35.899 LIB libspdk_rdma.a 00:01:35.899 SO libspdk_conf.so.6.0 00:01:35.899 LIB libspdk_json.a 00:01:35.899 SO libspdk_rdma.so.6.0 00:01:35.899 SYMLINK libspdk_conf.so 00:01:35.899 SO libspdk_json.so.6.0 00:01:35.899 SYMLINK libspdk_rdma.so 00:01:35.899 SYMLINK libspdk_json.so 00:01:35.899 LIB libspdk_idxd.a 00:01:35.899 LIB libspdk_vmd.a 00:01:36.158 SO libspdk_idxd.so.12.0 00:01:36.158 SO libspdk_vmd.so.6.0 00:01:36.158 SYMLINK libspdk_idxd.so 00:01:36.158 SYMLINK libspdk_vmd.so 00:01:36.158 CC lib/jsonrpc/jsonrpc_server.o 00:01:36.158 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:36.158 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:36.158 CC lib/jsonrpc/jsonrpc_client.o 00:01:36.416 LIB libspdk_jsonrpc.a 00:01:36.416 SO libspdk_jsonrpc.so.6.0 00:01:36.416 SYMLINK libspdk_jsonrpc.so 00:01:36.674 LIB libspdk_env_dpdk.a 00:01:36.674 SO libspdk_env_dpdk.so.14.0 00:01:36.674 SYMLINK libspdk_env_dpdk.so 00:01:36.674 CC lib/rpc/rpc.o 00:01:36.933 LIB libspdk_rpc.a 00:01:36.933 SO libspdk_rpc.so.6.0 00:01:36.933 SYMLINK libspdk_rpc.so 00:01:37.190 CC lib/trace/trace.o 00:01:37.190 CC lib/trace/trace_flags.o 00:01:37.190 CC lib/trace/trace_rpc.o 00:01:37.448 CC lib/notify/notify.o 00:01:37.448 CC lib/notify/notify_rpc.o 00:01:37.448 CC lib/keyring/keyring_rpc.o 00:01:37.448 CC lib/keyring/keyring.o 00:01:37.448 LIB libspdk_notify.a 00:01:37.448 SO libspdk_notify.so.6.0 00:01:37.448 LIB libspdk_trace.a 00:01:37.448 LIB libspdk_keyring.a 
00:01:37.448 SYMLINK libspdk_notify.so 00:01:37.448 SO libspdk_trace.so.10.0 00:01:37.707 SO libspdk_keyring.so.1.0 00:01:37.707 SYMLINK libspdk_trace.so 00:01:37.707 SYMLINK libspdk_keyring.so 00:01:37.965 CC lib/sock/sock.o 00:01:37.965 CC lib/sock/sock_rpc.o 00:01:37.965 CC lib/thread/thread.o 00:01:37.965 CC lib/thread/iobuf.o 00:01:38.223 LIB libspdk_sock.a 00:01:38.223 SO libspdk_sock.so.9.0 00:01:38.223 SYMLINK libspdk_sock.so 00:01:38.481 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:38.481 CC lib/nvme/nvme_ctrlr.o 00:01:38.481 CC lib/nvme/nvme_fabric.o 00:01:38.481 CC lib/nvme/nvme_ns_cmd.o 00:01:38.481 CC lib/nvme/nvme_ns.o 00:01:38.481 CC lib/nvme/nvme_qpair.o 00:01:38.481 CC lib/nvme/nvme_pcie_common.o 00:01:38.481 CC lib/nvme/nvme_pcie.o 00:01:38.481 CC lib/nvme/nvme.o 00:01:38.481 CC lib/nvme/nvme_quirks.o 00:01:38.481 CC lib/nvme/nvme_transport.o 00:01:38.481 CC lib/nvme/nvme_discovery.o 00:01:38.481 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:38.481 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:38.481 CC lib/nvme/nvme_tcp.o 00:01:38.481 CC lib/nvme/nvme_io_msg.o 00:01:38.481 CC lib/nvme/nvme_opal.o 00:01:38.481 CC lib/nvme/nvme_poll_group.o 00:01:38.481 CC lib/nvme/nvme_zns.o 00:01:38.481 CC lib/nvme/nvme_stubs.o 00:01:38.481 CC lib/nvme/nvme_auth.o 00:01:38.481 CC lib/nvme/nvme_cuse.o 00:01:38.481 CC lib/nvme/nvme_vfio_user.o 00:01:38.481 CC lib/nvme/nvme_rdma.o 00:01:39.047 LIB libspdk_thread.a 00:01:39.047 SO libspdk_thread.so.10.0 00:01:39.047 SYMLINK libspdk_thread.so 00:01:39.305 CC lib/init/json_config.o 00:01:39.305 CC lib/init/subsystem.o 00:01:39.305 CC lib/init/subsystem_rpc.o 00:01:39.305 CC lib/init/rpc.o 00:01:39.305 CC lib/accel/accel.o 00:01:39.305 CC lib/accel/accel_rpc.o 00:01:39.305 CC lib/accel/accel_sw.o 00:01:39.305 CC lib/vfu_tgt/tgt_endpoint.o 00:01:39.305 CC lib/vfu_tgt/tgt_rpc.o 00:01:39.305 CC lib/blob/blobstore.o 00:01:39.305 CC lib/blob/request.o 00:01:39.305 CC lib/blob/zeroes.o 00:01:39.305 CC lib/blob/blob_bs_dev.o 00:01:39.305 CC lib/virtio/virtio.o 00:01:39.305 CC lib/virtio/virtio_vhost_user.o 00:01:39.305 CC lib/virtio/virtio_vfio_user.o 00:01:39.305 CC lib/virtio/virtio_pci.o 00:01:39.564 LIB libspdk_init.a 00:01:39.564 SO libspdk_init.so.5.0 00:01:39.564 SYMLINK libspdk_init.so 00:01:39.564 LIB libspdk_vfu_tgt.a 00:01:39.564 LIB libspdk_virtio.a 00:01:39.564 SO libspdk_vfu_tgt.so.3.0 00:01:39.564 SO libspdk_virtio.so.7.0 00:01:39.823 SYMLINK libspdk_vfu_tgt.so 00:01:39.823 SYMLINK libspdk_virtio.so 00:01:39.823 CC lib/event/app.o 00:01:39.823 CC lib/event/reactor.o 00:01:39.823 CC lib/event/scheduler_static.o 00:01:39.823 CC lib/event/log_rpc.o 00:01:39.823 CC lib/event/app_rpc.o 00:01:40.082 LIB libspdk_accel.a 00:01:40.082 SO libspdk_accel.so.15.0 00:01:40.082 LIB libspdk_nvme.a 00:01:40.082 SYMLINK libspdk_accel.so 00:01:40.341 LIB libspdk_event.a 00:01:40.341 SO libspdk_nvme.so.13.0 00:01:40.341 SO libspdk_event.so.13.0 00:01:40.341 SYMLINK libspdk_event.so 00:01:40.599 CC lib/bdev/bdev_rpc.o 00:01:40.599 CC lib/bdev/bdev.o 00:01:40.599 CC lib/bdev/part.o 00:01:40.599 CC lib/bdev/bdev_zone.o 00:01:40.599 CC lib/bdev/scsi_nvme.o 00:01:40.599 SYMLINK libspdk_nvme.so 00:01:41.535 LIB libspdk_blob.a 00:01:41.535 SO libspdk_blob.so.11.0 00:01:41.535 SYMLINK libspdk_blob.so 00:01:41.793 CC lib/lvol/lvol.o 00:01:41.793 CC lib/blobfs/blobfs.o 00:01:41.793 CC lib/blobfs/tree.o 00:01:42.359 LIB libspdk_bdev.a 00:01:42.359 SO libspdk_bdev.so.15.0 00:01:42.359 SYMLINK libspdk_bdev.so 00:01:42.359 LIB libspdk_blobfs.a 00:01:42.359 LIB libspdk_lvol.a 00:01:42.359 SO 
libspdk_blobfs.so.10.0 00:01:42.671 SO libspdk_lvol.so.10.0 00:01:42.671 SYMLINK libspdk_blobfs.so 00:01:42.671 SYMLINK libspdk_lvol.so 00:01:42.671 CC lib/nbd/nbd.o 00:01:42.671 CC lib/nbd/nbd_rpc.o 00:01:42.671 CC lib/nvmf/ctrlr.o 00:01:42.671 CC lib/nvmf/ctrlr_discovery.o 00:01:42.671 CC lib/nvmf/ctrlr_bdev.o 00:01:42.671 CC lib/nvmf/subsystem.o 00:01:42.671 CC lib/nvmf/nvmf.o 00:01:42.671 CC lib/nvmf/nvmf_rpc.o 00:01:42.671 CC lib/ftl/ftl_core.o 00:01:42.671 CC lib/ftl/ftl_init.o 00:01:42.671 CC lib/nvmf/transport.o 00:01:42.671 CC lib/ftl/ftl_layout.o 00:01:42.671 CC lib/ftl/ftl_debug.o 00:01:42.671 CC lib/nvmf/tcp.o 00:01:42.671 CC lib/ftl/ftl_io.o 00:01:42.671 CC lib/nvmf/stubs.o 00:01:42.671 CC lib/nvmf/vfio_user.o 00:01:42.671 CC lib/ftl/ftl_sb.o 00:01:42.671 CC lib/ftl/ftl_l2p.o 00:01:42.671 CC lib/nvmf/mdns_server.o 00:01:42.671 CC lib/ftl/ftl_l2p_flat.o 00:01:42.671 CC lib/nvmf/auth.o 00:01:42.671 CC lib/ftl/ftl_nv_cache.o 00:01:42.671 CC lib/ublk/ublk.o 00:01:42.671 CC lib/nvmf/rdma.o 00:01:42.671 CC lib/ublk/ublk_rpc.o 00:01:42.671 CC lib/scsi/dev.o 00:01:42.671 CC lib/scsi/lun.o 00:01:42.671 CC lib/ftl/ftl_band_ops.o 00:01:42.671 CC lib/ftl/ftl_band.o 00:01:42.671 CC lib/ftl/ftl_writer.o 00:01:42.671 CC lib/scsi/port.o 00:01:42.671 CC lib/scsi/scsi_bdev.o 00:01:42.671 CC lib/scsi/scsi.o 00:01:42.671 CC lib/ftl/ftl_rq.o 00:01:42.671 CC lib/ftl/ftl_reloc.o 00:01:42.671 CC lib/scsi/scsi_rpc.o 00:01:42.671 CC lib/scsi/scsi_pr.o 00:01:42.671 CC lib/ftl/ftl_l2p_cache.o 00:01:42.671 CC lib/ftl/ftl_p2l.o 00:01:42.671 CC lib/scsi/task.o 00:01:42.671 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:42.671 CC lib/ftl/mngt/ftl_mngt.o 00:01:42.671 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:42.671 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:42.671 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:42.671 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:42.671 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:42.671 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:42.671 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:42.671 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:42.671 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:42.671 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:42.671 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:42.671 CC lib/ftl/utils/ftl_conf.o 00:01:42.671 CC lib/ftl/utils/ftl_mempool.o 00:01:42.671 CC lib/ftl/utils/ftl_md.o 00:01:42.671 CC lib/ftl/utils/ftl_bitmap.o 00:01:42.671 CC lib/ftl/utils/ftl_property.o 00:01:42.671 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:42.671 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:42.671 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:42.671 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:42.671 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:42.671 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:42.671 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:42.671 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:42.671 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:42.671 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:42.671 CC lib/ftl/base/ftl_base_dev.o 00:01:42.671 CC lib/ftl/ftl_trace.o 00:01:42.671 CC lib/ftl/base/ftl_base_bdev.o 00:01:43.237 LIB libspdk_nbd.a 00:01:43.237 SO libspdk_nbd.so.7.0 00:01:43.237 SYMLINK libspdk_nbd.so 00:01:43.237 LIB libspdk_ublk.a 00:01:43.237 LIB libspdk_scsi.a 00:01:43.237 SO libspdk_ublk.so.3.0 00:01:43.237 SO libspdk_scsi.so.9.0 00:01:43.237 SYMLINK libspdk_ublk.so 00:01:43.495 SYMLINK libspdk_scsi.so 00:01:43.495 LIB libspdk_ftl.a 00:01:43.754 CC lib/iscsi/conn.o 00:01:43.754 CC lib/vhost/vhost.o 00:01:43.754 CC lib/iscsi/init_grp.o 00:01:43.754 CC lib/vhost/vhost_rpc.o 00:01:43.754 CC lib/iscsi/iscsi.o 00:01:43.754 CC lib/vhost/vhost_scsi.o 
00:01:43.754 CC lib/iscsi/md5.o 00:01:43.754 CC lib/iscsi/param.o 00:01:43.754 CC lib/vhost/vhost_blk.o 00:01:43.754 CC lib/iscsi/portal_grp.o 00:01:43.754 CC lib/vhost/rte_vhost_user.o 00:01:43.754 CC lib/iscsi/tgt_node.o 00:01:43.754 CC lib/iscsi/iscsi_rpc.o 00:01:43.754 CC lib/iscsi/iscsi_subsystem.o 00:01:43.754 CC lib/iscsi/task.o 00:01:43.754 SO libspdk_ftl.so.9.0 00:01:44.014 SYMLINK libspdk_ftl.so 00:01:44.273 LIB libspdk_nvmf.a 00:01:44.273 SO libspdk_nvmf.so.18.0 00:01:44.531 LIB libspdk_vhost.a 00:01:44.531 SO libspdk_vhost.so.8.0 00:01:44.531 SYMLINK libspdk_nvmf.so 00:01:44.531 SYMLINK libspdk_vhost.so 00:01:44.531 LIB libspdk_iscsi.a 00:01:44.790 SO libspdk_iscsi.so.8.0 00:01:44.791 SYMLINK libspdk_iscsi.so 00:01:45.358 CC module/env_dpdk/env_dpdk_rpc.o 00:01:45.358 CC module/vfu_device/vfu_virtio.o 00:01:45.358 CC module/vfu_device/vfu_virtio_blk.o 00:01:45.358 CC module/vfu_device/vfu_virtio_scsi.o 00:01:45.358 CC module/vfu_device/vfu_virtio_rpc.o 00:01:45.358 CC module/keyring/file/keyring.o 00:01:45.358 CC module/keyring/file/keyring_rpc.o 00:01:45.358 CC module/accel/error/accel_error.o 00:01:45.358 CC module/accel/error/accel_error_rpc.o 00:01:45.358 CC module/accel/iaa/accel_iaa.o 00:01:45.358 CC module/accel/ioat/accel_ioat_rpc.o 00:01:45.358 CC module/accel/iaa/accel_iaa_rpc.o 00:01:45.358 CC module/accel/ioat/accel_ioat.o 00:01:45.358 LIB libspdk_env_dpdk_rpc.a 00:01:45.358 CC module/accel/dsa/accel_dsa.o 00:01:45.358 CC module/sock/posix/posix.o 00:01:45.358 CC module/accel/dsa/accel_dsa_rpc.o 00:01:45.358 CC module/blob/bdev/blob_bdev.o 00:01:45.358 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:45.358 CC module/scheduler/gscheduler/gscheduler.o 00:01:45.358 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:45.358 SO libspdk_env_dpdk_rpc.so.6.0 00:01:45.615 SYMLINK libspdk_env_dpdk_rpc.so 00:01:45.615 LIB libspdk_keyring_file.a 00:01:45.615 LIB libspdk_accel_error.a 00:01:45.615 SO libspdk_keyring_file.so.1.0 00:01:45.615 LIB libspdk_scheduler_gscheduler.a 00:01:45.615 LIB libspdk_accel_ioat.a 00:01:45.615 LIB libspdk_scheduler_dpdk_governor.a 00:01:45.615 SO libspdk_accel_error.so.2.0 00:01:45.615 SO libspdk_scheduler_gscheduler.so.4.0 00:01:45.615 LIB libspdk_accel_iaa.a 00:01:45.615 LIB libspdk_scheduler_dynamic.a 00:01:45.615 SO libspdk_accel_ioat.so.6.0 00:01:45.615 SYMLINK libspdk_keyring_file.so 00:01:45.615 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:45.615 LIB libspdk_accel_dsa.a 00:01:45.615 LIB libspdk_blob_bdev.a 00:01:45.615 SO libspdk_accel_iaa.so.3.0 00:01:45.615 SO libspdk_scheduler_dynamic.so.4.0 00:01:45.615 SYMLINK libspdk_scheduler_gscheduler.so 00:01:45.615 SO libspdk_accel_dsa.so.5.0 00:01:45.615 SYMLINK libspdk_accel_error.so 00:01:45.615 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:45.615 SO libspdk_blob_bdev.so.11.0 00:01:45.615 SYMLINK libspdk_accel_ioat.so 00:01:45.615 SYMLINK libspdk_scheduler_dynamic.so 00:01:45.615 SYMLINK libspdk_accel_iaa.so 00:01:45.615 SYMLINK libspdk_accel_dsa.so 00:01:45.615 SYMLINK libspdk_blob_bdev.so 00:01:45.873 LIB libspdk_vfu_device.a 00:01:45.873 SO libspdk_vfu_device.so.3.0 00:01:45.873 SYMLINK libspdk_vfu_device.so 00:01:45.873 LIB libspdk_sock_posix.a 00:01:46.131 SO libspdk_sock_posix.so.6.0 00:01:46.131 SYMLINK libspdk_sock_posix.so 00:01:46.131 CC module/bdev/split/vbdev_split.o 00:01:46.131 CC module/bdev/split/vbdev_split_rpc.o 00:01:46.131 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:46.131 CC module/bdev/delay/vbdev_delay.o 00:01:46.131 CC module/bdev/nvme/bdev_nvme.o 
00:01:46.131 CC module/bdev/lvol/vbdev_lvol.o 00:01:46.131 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:46.131 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:46.131 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:46.131 CC module/bdev/nvme/bdev_mdns_client.o 00:01:46.131 CC module/bdev/malloc/bdev_malloc.o 00:01:46.131 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:46.131 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:46.131 CC module/bdev/nvme/nvme_rpc.o 00:01:46.131 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:46.131 CC module/bdev/nvme/vbdev_opal.o 00:01:46.131 CC module/bdev/raid/bdev_raid.o 00:01:46.131 CC module/bdev/raid/bdev_raid_rpc.o 00:01:46.131 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:46.131 CC module/bdev/raid/bdev_raid_sb.o 00:01:46.131 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:46.131 CC module/bdev/raid/raid0.o 00:01:46.131 CC module/bdev/gpt/gpt.o 00:01:46.131 CC module/bdev/raid/raid1.o 00:01:46.131 CC module/bdev/gpt/vbdev_gpt.o 00:01:46.131 CC module/bdev/raid/concat.o 00:01:46.131 CC module/bdev/error/vbdev_error.o 00:01:46.131 CC module/bdev/error/vbdev_error_rpc.o 00:01:46.131 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:46.131 CC module/blobfs/bdev/blobfs_bdev.o 00:01:46.131 CC module/bdev/null/bdev_null.o 00:01:46.131 CC module/bdev/null/bdev_null_rpc.o 00:01:46.131 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:46.131 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:46.131 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:46.131 CC module/bdev/passthru/vbdev_passthru.o 00:01:46.131 CC module/bdev/iscsi/bdev_iscsi.o 00:01:46.131 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:46.131 CC module/bdev/aio/bdev_aio.o 00:01:46.131 CC module/bdev/aio/bdev_aio_rpc.o 00:01:46.131 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:46.131 CC module/bdev/ftl/bdev_ftl.o 00:01:46.389 LIB libspdk_blobfs_bdev.a 00:01:46.389 LIB libspdk_bdev_split.a 00:01:46.389 SO libspdk_blobfs_bdev.so.6.0 00:01:46.389 SO libspdk_bdev_split.so.6.0 00:01:46.389 LIB libspdk_bdev_gpt.a 00:01:46.389 LIB libspdk_bdev_error.a 00:01:46.389 LIB libspdk_bdev_ftl.a 00:01:46.389 LIB libspdk_bdev_null.a 00:01:46.389 LIB libspdk_bdev_passthru.a 00:01:46.389 SO libspdk_bdev_gpt.so.6.0 00:01:46.389 SYMLINK libspdk_blobfs_bdev.so 00:01:46.389 SYMLINK libspdk_bdev_split.so 00:01:46.389 SO libspdk_bdev_passthru.so.6.0 00:01:46.389 SO libspdk_bdev_error.so.6.0 00:01:46.389 LIB libspdk_bdev_zone_block.a 00:01:46.646 LIB libspdk_bdev_delay.a 00:01:46.646 SO libspdk_bdev_ftl.so.6.0 00:01:46.646 SO libspdk_bdev_null.so.6.0 00:01:46.646 LIB libspdk_bdev_aio.a 00:01:46.646 LIB libspdk_bdev_malloc.a 00:01:46.646 SO libspdk_bdev_zone_block.so.6.0 00:01:46.646 SO libspdk_bdev_aio.so.6.0 00:01:46.646 SO libspdk_bdev_delay.so.6.0 00:01:46.646 SYMLINK libspdk_bdev_gpt.so 00:01:46.646 LIB libspdk_bdev_iscsi.a 00:01:46.646 SO libspdk_bdev_malloc.so.6.0 00:01:46.646 SYMLINK libspdk_bdev_passthru.so 00:01:46.646 SYMLINK libspdk_bdev_error.so 00:01:46.646 SYMLINK libspdk_bdev_ftl.so 00:01:46.646 SYMLINK libspdk_bdev_null.so 00:01:46.646 SO libspdk_bdev_iscsi.so.6.0 00:01:46.646 SYMLINK libspdk_bdev_zone_block.so 00:01:46.646 SYMLINK libspdk_bdev_aio.so 00:01:46.646 SYMLINK libspdk_bdev_malloc.so 00:01:46.646 SYMLINK libspdk_bdev_delay.so 00:01:46.646 LIB libspdk_bdev_lvol.a 00:01:46.646 SYMLINK libspdk_bdev_iscsi.so 00:01:46.646 LIB libspdk_bdev_virtio.a 00:01:46.646 SO libspdk_bdev_lvol.so.6.0 00:01:46.646 SO libspdk_bdev_virtio.so.6.0 00:01:46.646 SYMLINK libspdk_bdev_lvol.so 00:01:46.646 SYMLINK libspdk_bdev_virtio.so 00:01:46.905 LIB 
libspdk_bdev_raid.a 00:01:46.905 SO libspdk_bdev_raid.so.6.0 00:01:47.163 SYMLINK libspdk_bdev_raid.so 00:01:47.730 LIB libspdk_bdev_nvme.a 00:01:47.730 SO libspdk_bdev_nvme.so.7.0 00:01:47.730 SYMLINK libspdk_bdev_nvme.so 00:01:48.666 CC module/event/subsystems/iobuf/iobuf.o 00:01:48.666 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:48.666 CC module/event/subsystems/keyring/keyring.o 00:01:48.666 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:48.666 CC module/event/subsystems/scheduler/scheduler.o 00:01:48.666 CC module/event/subsystems/vmd/vmd.o 00:01:48.666 CC module/event/subsystems/sock/sock.o 00:01:48.666 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:48.666 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:48.666 LIB libspdk_event_keyring.a 00:01:48.666 LIB libspdk_event_iobuf.a 00:01:48.666 SO libspdk_event_keyring.so.1.0 00:01:48.666 LIB libspdk_event_vhost_blk.a 00:01:48.666 LIB libspdk_event_vmd.a 00:01:48.666 LIB libspdk_event_scheduler.a 00:01:48.666 SO libspdk_event_iobuf.so.3.0 00:01:48.666 LIB libspdk_event_sock.a 00:01:48.666 LIB libspdk_event_vfu_tgt.a 00:01:48.666 SO libspdk_event_vhost_blk.so.3.0 00:01:48.666 SO libspdk_event_scheduler.so.4.0 00:01:48.666 SO libspdk_event_vmd.so.6.0 00:01:48.666 SO libspdk_event_sock.so.5.0 00:01:48.666 SYMLINK libspdk_event_keyring.so 00:01:48.666 SO libspdk_event_vfu_tgt.so.3.0 00:01:48.666 SYMLINK libspdk_event_iobuf.so 00:01:48.666 SYMLINK libspdk_event_vmd.so 00:01:48.667 SYMLINK libspdk_event_vhost_blk.so 00:01:48.667 SYMLINK libspdk_event_scheduler.so 00:01:48.667 SYMLINK libspdk_event_sock.so 00:01:48.667 SYMLINK libspdk_event_vfu_tgt.so 00:01:48.924 CC module/event/subsystems/accel/accel.o 00:01:49.182 LIB libspdk_event_accel.a 00:01:49.182 SO libspdk_event_accel.so.6.0 00:01:49.182 SYMLINK libspdk_event_accel.so 00:01:49.441 CC module/event/subsystems/bdev/bdev.o 00:01:49.698 LIB libspdk_event_bdev.a 00:01:49.698 SO libspdk_event_bdev.so.6.0 00:01:49.698 SYMLINK libspdk_event_bdev.so 00:01:49.956 CC module/event/subsystems/ublk/ublk.o 00:01:49.956 CC module/event/subsystems/nbd/nbd.o 00:01:49.956 CC module/event/subsystems/scsi/scsi.o 00:01:49.956 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:49.956 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:50.215 LIB libspdk_event_ublk.a 00:01:50.215 LIB libspdk_event_nbd.a 00:01:50.215 LIB libspdk_event_scsi.a 00:01:50.215 SO libspdk_event_ublk.so.3.0 00:01:50.215 SO libspdk_event_nbd.so.6.0 00:01:50.215 SO libspdk_event_scsi.so.6.0 00:01:50.215 SYMLINK libspdk_event_ublk.so 00:01:50.215 SYMLINK libspdk_event_nbd.so 00:01:50.215 LIB libspdk_event_nvmf.a 00:01:50.215 SYMLINK libspdk_event_scsi.so 00:01:50.215 SO libspdk_event_nvmf.so.6.0 00:01:50.472 SYMLINK libspdk_event_nvmf.so 00:01:50.472 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:50.472 CC module/event/subsystems/iscsi/iscsi.o 00:01:50.730 LIB libspdk_event_vhost_scsi.a 00:01:50.730 SO libspdk_event_vhost_scsi.so.3.0 00:01:50.730 LIB libspdk_event_iscsi.a 00:01:50.730 SYMLINK libspdk_event_vhost_scsi.so 00:01:50.730 SO libspdk_event_iscsi.so.6.0 00:01:50.730 SYMLINK libspdk_event_iscsi.so 00:01:50.988 SO libspdk.so.6.0 00:01:50.988 SYMLINK libspdk.so 00:01:51.247 CXX app/trace/trace.o 00:01:51.247 CC app/spdk_lspci/spdk_lspci.o 00:01:51.247 CC app/spdk_top/spdk_top.o 00:01:51.247 CC app/spdk_nvme_discover/discovery_aer.o 00:01:51.247 CC app/spdk_nvme_identify/identify.o 00:01:51.247 CC app/trace_record/trace_record.o 00:01:51.247 CC app/spdk_nvme_perf/perf.o 00:01:51.247 CC 
test/rpc_client/rpc_client_test.o 00:01:51.247 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:51.247 CC app/spdk_dd/spdk_dd.o 00:01:51.247 TEST_HEADER include/spdk/accel.h 00:01:51.247 CC app/iscsi_tgt/iscsi_tgt.o 00:01:51.247 TEST_HEADER include/spdk/accel_module.h 00:01:51.247 TEST_HEADER include/spdk/assert.h 00:01:51.247 TEST_HEADER include/spdk/barrier.h 00:01:51.247 TEST_HEADER include/spdk/base64.h 00:01:51.247 TEST_HEADER include/spdk/bdev_module.h 00:01:51.247 TEST_HEADER include/spdk/bdev.h 00:01:51.247 CC app/nvmf_tgt/nvmf_main.o 00:01:51.247 TEST_HEADER include/spdk/bit_array.h 00:01:51.247 TEST_HEADER include/spdk/bdev_zone.h 00:01:51.247 TEST_HEADER include/spdk/blob_bdev.h 00:01:51.247 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:51.247 TEST_HEADER include/spdk/bit_pool.h 00:01:51.247 TEST_HEADER include/spdk/blobfs.h 00:01:51.247 TEST_HEADER include/spdk/blob.h 00:01:51.247 TEST_HEADER include/spdk/conf.h 00:01:51.247 TEST_HEADER include/spdk/cpuset.h 00:01:51.247 TEST_HEADER include/spdk/config.h 00:01:51.247 TEST_HEADER include/spdk/crc32.h 00:01:51.247 TEST_HEADER include/spdk/crc16.h 00:01:51.247 TEST_HEADER include/spdk/dif.h 00:01:51.247 TEST_HEADER include/spdk/dma.h 00:01:51.247 TEST_HEADER include/spdk/endian.h 00:01:51.247 CC app/vhost/vhost.o 00:01:51.247 TEST_HEADER include/spdk/crc64.h 00:01:51.247 TEST_HEADER include/spdk/env_dpdk.h 00:01:51.247 TEST_HEADER include/spdk/env.h 00:01:51.247 TEST_HEADER include/spdk/fd_group.h 00:01:51.247 TEST_HEADER include/spdk/event.h 00:01:51.247 TEST_HEADER include/spdk/fd.h 00:01:51.247 TEST_HEADER include/spdk/file.h 00:01:51.247 TEST_HEADER include/spdk/gpt_spec.h 00:01:51.247 TEST_HEADER include/spdk/hexlify.h 00:01:51.247 TEST_HEADER include/spdk/ftl.h 00:01:51.509 TEST_HEADER include/spdk/histogram_data.h 00:01:51.509 TEST_HEADER include/spdk/init.h 00:01:51.509 TEST_HEADER include/spdk/idxd_spec.h 00:01:51.509 TEST_HEADER include/spdk/idxd.h 00:01:51.509 TEST_HEADER include/spdk/ioat_spec.h 00:01:51.509 TEST_HEADER include/spdk/ioat.h 00:01:51.509 TEST_HEADER include/spdk/json.h 00:01:51.509 TEST_HEADER include/spdk/iscsi_spec.h 00:01:51.509 TEST_HEADER include/spdk/jsonrpc.h 00:01:51.509 TEST_HEADER include/spdk/keyring_module.h 00:01:51.509 TEST_HEADER include/spdk/keyring.h 00:01:51.509 TEST_HEADER include/spdk/likely.h 00:01:51.509 TEST_HEADER include/spdk/lvol.h 00:01:51.509 TEST_HEADER include/spdk/memory.h 00:01:51.509 TEST_HEADER include/spdk/log.h 00:01:51.509 TEST_HEADER include/spdk/mmio.h 00:01:51.509 TEST_HEADER include/spdk/nbd.h 00:01:51.509 TEST_HEADER include/spdk/notify.h 00:01:51.509 TEST_HEADER include/spdk/nvme.h 00:01:51.509 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:51.509 TEST_HEADER include/spdk/nvme_intel.h 00:01:51.509 TEST_HEADER include/spdk/nvme_spec.h 00:01:51.509 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:51.509 TEST_HEADER include/spdk/nvme_zns.h 00:01:51.509 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:51.509 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:51.509 TEST_HEADER include/spdk/nvmf.h 00:01:51.509 TEST_HEADER include/spdk/nvmf_spec.h 00:01:51.509 TEST_HEADER include/spdk/opal.h 00:01:51.509 TEST_HEADER include/spdk/opal_spec.h 00:01:51.509 TEST_HEADER include/spdk/nvmf_transport.h 00:01:51.509 TEST_HEADER include/spdk/pipe.h 00:01:51.509 TEST_HEADER include/spdk/pci_ids.h 00:01:51.509 TEST_HEADER include/spdk/queue.h 00:01:51.509 TEST_HEADER include/spdk/rpc.h 00:01:51.509 TEST_HEADER include/spdk/reduce.h 00:01:51.509 TEST_HEADER include/spdk/scheduler.h 00:01:51.509 
TEST_HEADER include/spdk/scsi.h 00:01:51.509 TEST_HEADER include/spdk/scsi_spec.h 00:01:51.509 TEST_HEADER include/spdk/sock.h 00:01:51.509 TEST_HEADER include/spdk/stdinc.h 00:01:51.509 TEST_HEADER include/spdk/thread.h 00:01:51.509 TEST_HEADER include/spdk/string.h 00:01:51.509 TEST_HEADER include/spdk/trace.h 00:01:51.509 TEST_HEADER include/spdk/trace_parser.h 00:01:51.509 CC app/spdk_tgt/spdk_tgt.o 00:01:51.509 TEST_HEADER include/spdk/util.h 00:01:51.509 TEST_HEADER include/spdk/tree.h 00:01:51.509 TEST_HEADER include/spdk/ublk.h 00:01:51.509 TEST_HEADER include/spdk/version.h 00:01:51.509 TEST_HEADER include/spdk/uuid.h 00:01:51.509 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:51.509 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:51.509 TEST_HEADER include/spdk/vmd.h 00:01:51.509 TEST_HEADER include/spdk/vhost.h 00:01:51.509 TEST_HEADER include/spdk/xor.h 00:01:51.509 TEST_HEADER include/spdk/zipf.h 00:01:51.509 CXX test/cpp_headers/accel_module.o 00:01:51.509 CXX test/cpp_headers/accel.o 00:01:51.509 CXX test/cpp_headers/assert.o 00:01:51.509 CXX test/cpp_headers/barrier.o 00:01:51.509 CXX test/cpp_headers/base64.o 00:01:51.509 CXX test/cpp_headers/bdev.o 00:01:51.509 CXX test/cpp_headers/bdev_module.o 00:01:51.509 CXX test/cpp_headers/bdev_zone.o 00:01:51.509 CXX test/cpp_headers/bit_array.o 00:01:51.509 CXX test/cpp_headers/bit_pool.o 00:01:51.509 CXX test/cpp_headers/blobfs_bdev.o 00:01:51.509 CXX test/cpp_headers/blob_bdev.o 00:01:51.509 CXX test/cpp_headers/blobfs.o 00:01:51.509 CXX test/cpp_headers/blob.o 00:01:51.509 CXX test/cpp_headers/conf.o 00:01:51.509 CXX test/cpp_headers/cpuset.o 00:01:51.509 CXX test/cpp_headers/crc16.o 00:01:51.509 CXX test/cpp_headers/config.o 00:01:51.509 CXX test/cpp_headers/crc32.o 00:01:51.509 CXX test/cpp_headers/crc64.o 00:01:51.509 CXX test/cpp_headers/dif.o 00:01:51.509 CC examples/nvme/reconnect/reconnect.o 00:01:51.509 CC examples/nvme/arbitration/arbitration.o 00:01:51.509 CC examples/idxd/perf/perf.o 00:01:51.509 CC examples/nvme/abort/abort.o 00:01:51.509 CC test/app/stub/stub.o 00:01:51.509 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:51.509 CC examples/sock/hello_world/hello_sock.o 00:01:51.509 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:51.509 CC examples/ioat/verify/verify.o 00:01:51.509 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:51.509 CC examples/ioat/perf/perf.o 00:01:51.509 CC examples/nvme/hello_world/hello_world.o 00:01:51.509 CC examples/nvme/hotplug/hotplug.o 00:01:51.509 CC test/app/histogram_perf/histogram_perf.o 00:01:51.509 CXX test/cpp_headers/dma.o 00:01:51.509 CC test/app/jsoncat/jsoncat.o 00:01:51.509 CC app/fio/nvme/fio_plugin.o 00:01:51.509 CC test/event/event_perf/event_perf.o 00:01:51.509 CC test/env/vtophys/vtophys.o 00:01:51.509 CC test/event/reactor_perf/reactor_perf.o 00:01:51.509 CC test/nvme/overhead/overhead.o 00:01:51.509 CC examples/accel/perf/accel_perf.o 00:01:51.509 CC examples/vmd/led/led.o 00:01:51.509 CC test/nvme/e2edp/nvme_dp.o 00:01:51.509 CC examples/vmd/lsvmd/lsvmd.o 00:01:51.509 CC test/nvme/startup/startup.o 00:01:51.509 CC test/nvme/sgl/sgl.o 00:01:51.509 CC examples/util/zipf/zipf.o 00:01:51.509 CC test/thread/poller_perf/poller_perf.o 00:01:51.509 CC test/nvme/reset/reset.o 00:01:51.509 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:51.509 CC test/nvme/connect_stress/connect_stress.o 00:01:51.509 CC test/nvme/cuse/cuse.o 00:01:51.509 CC test/nvme/fused_ordering/fused_ordering.o 00:01:51.509 CC test/env/memory/memory_ut.o 00:01:51.509 CC 
test/nvme/boot_partition/boot_partition.o 00:01:51.509 CC examples/bdev/bdevperf/bdevperf.o 00:01:51.509 CC test/nvme/simple_copy/simple_copy.o 00:01:51.509 CC test/env/pci/pci_ut.o 00:01:51.509 CC test/app/bdev_svc/bdev_svc.o 00:01:51.509 CC examples/blob/cli/blobcli.o 00:01:51.509 CC test/event/reactor/reactor.o 00:01:51.509 CC test/bdev/bdevio/bdevio.o 00:01:51.509 CC test/event/app_repeat/app_repeat.o 00:01:51.509 CC examples/blob/hello_world/hello_blob.o 00:01:51.509 CC test/nvme/err_injection/err_injection.o 00:01:51.509 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:51.509 CC examples/nvmf/nvmf/nvmf.o 00:01:51.509 CC test/event/scheduler/scheduler.o 00:01:51.509 CC test/nvme/aer/aer.o 00:01:51.509 CC examples/bdev/hello_world/hello_bdev.o 00:01:51.509 CC test/accel/dif/dif.o 00:01:51.509 CC test/nvme/compliance/nvme_compliance.o 00:01:51.509 CC test/nvme/reserve/reserve.o 00:01:51.509 LINK spdk_lspci 00:01:51.509 CC test/nvme/fdp/fdp.o 00:01:51.509 CC test/blobfs/mkfs/mkfs.o 00:01:51.509 CC app/fio/bdev/fio_plugin.o 00:01:51.776 CC test/dma/test_dma/test_dma.o 00:01:51.776 CC examples/thread/thread/thread_ex.o 00:01:51.776 LINK interrupt_tgt 00:01:51.776 LINK nvmf_tgt 00:01:51.776 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:51.776 LINK vhost 00:01:51.776 CC test/env/mem_callbacks/mem_callbacks.o 00:01:51.776 LINK vtophys 00:01:52.036 LINK event_perf 00:01:52.036 LINK poller_perf 00:01:52.036 LINK rpc_client_test 00:01:52.036 LINK reactor_perf 00:01:52.036 CC test/lvol/esnap/esnap.o 00:01:52.036 LINK spdk_nvme_discover 00:01:52.036 LINK pmr_persistence 00:01:52.036 LINK histogram_perf 00:01:52.036 LINK cmb_copy 00:01:52.036 CXX test/cpp_headers/endian.o 00:01:52.036 LINK env_dpdk_post_init 00:01:52.036 LINK zipf 00:01:52.036 LINK iscsi_tgt 00:01:52.036 CXX test/cpp_headers/env_dpdk.o 00:01:52.036 CXX test/cpp_headers/env.o 00:01:52.036 CXX test/cpp_headers/event.o 00:01:52.036 CXX test/cpp_headers/fd_group.o 00:01:52.036 LINK verify 00:01:52.036 CXX test/cpp_headers/fd.o 00:01:52.036 CXX test/cpp_headers/file.o 00:01:52.036 CXX test/cpp_headers/ftl.o 00:01:52.036 LINK hotplug 00:01:52.036 LINK spdk_dd 00:01:52.036 LINK ioat_perf 00:01:52.036 LINK doorbell_aers 00:01:52.036 LINK jsoncat 00:01:52.036 LINK err_injection 00:01:52.036 LINK lsvmd 00:01:52.036 LINK fused_ordering 00:01:52.036 CXX test/cpp_headers/gpt_spec.o 00:01:52.036 LINK spdk_tgt 00:01:52.036 LINK spdk_trace_record 00:01:52.036 LINK led 00:01:52.036 CXX test/cpp_headers/hexlify.o 00:01:52.036 LINK reactor 00:01:52.036 LINK sgl 00:01:52.036 LINK scheduler 00:01:52.036 CXX test/cpp_headers/histogram_data.o 00:01:52.036 LINK overhead 00:01:52.036 LINK nvme_dp 00:01:52.036 LINK stub 00:01:52.036 LINK app_repeat 00:01:52.036 LINK connect_stress 00:01:52.036 LINK startup 00:01:52.036 LINK arbitration 00:01:52.302 LINK boot_partition 00:01:52.302 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:52.302 LINK spdk_trace 00:01:52.302 CXX test/cpp_headers/idxd.o 00:01:52.302 CXX test/cpp_headers/ioat.o 00:01:52.302 CXX test/cpp_headers/init.o 00:01:52.302 LINK hello_world 00:01:52.302 CXX test/cpp_headers/idxd_spec.o 00:01:52.302 CXX test/cpp_headers/ioat_spec.o 00:01:52.302 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:52.302 LINK bdev_svc 00:01:52.302 LINK hello_sock 00:01:52.302 CXX test/cpp_headers/iscsi_spec.o 00:01:52.302 CXX test/cpp_headers/json.o 00:01:52.302 CXX test/cpp_headers/jsonrpc.o 00:01:52.302 CXX test/cpp_headers/keyring.o 00:01:52.302 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:52.302 LINK reserve 
00:01:52.302 CXX test/cpp_headers/keyring_module.o 00:01:52.302 CXX test/cpp_headers/likely.o 00:01:52.302 CXX test/cpp_headers/lvol.o 00:01:52.302 CXX test/cpp_headers/log.o 00:01:52.302 CXX test/cpp_headers/memory.o 00:01:52.302 LINK simple_copy 00:01:52.302 LINK mkfs 00:01:52.302 CXX test/cpp_headers/mmio.o 00:01:52.302 CXX test/cpp_headers/nbd.o 00:01:52.302 LINK hello_blob 00:01:52.302 LINK hello_bdev 00:01:52.302 CXX test/cpp_headers/notify.o 00:01:52.302 CXX test/cpp_headers/nvme.o 00:01:52.302 CXX test/cpp_headers/nvme_intel.o 00:01:52.302 CXX test/cpp_headers/nvme_ocssd.o 00:01:52.302 LINK reset 00:01:52.302 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:52.302 LINK thread 00:01:52.302 LINK dif 00:01:52.302 CXX test/cpp_headers/nvme_spec.o 00:01:52.302 CXX test/cpp_headers/nvme_zns.o 00:01:52.302 CXX test/cpp_headers/nvmf_cmd.o 00:01:52.302 LINK test_dma 00:01:52.302 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:52.302 CXX test/cpp_headers/nvmf.o 00:01:52.302 CXX test/cpp_headers/nvmf_spec.o 00:01:52.302 LINK idxd_perf 00:01:52.302 CXX test/cpp_headers/nvmf_transport.o 00:01:52.302 LINK aer 00:01:52.302 CXX test/cpp_headers/opal.o 00:01:52.302 CXX test/cpp_headers/opal_spec.o 00:01:52.302 CXX test/cpp_headers/pci_ids.o 00:01:52.302 CXX test/cpp_headers/pipe.o 00:01:52.302 CXX test/cpp_headers/queue.o 00:01:52.302 CXX test/cpp_headers/reduce.o 00:01:52.302 CXX test/cpp_headers/rpc.o 00:01:52.302 LINK reconnect 00:01:52.302 CXX test/cpp_headers/scheduler.o 00:01:52.302 CXX test/cpp_headers/scsi.o 00:01:52.302 LINK abort 00:01:52.302 CXX test/cpp_headers/scsi_spec.o 00:01:52.302 CXX test/cpp_headers/sock.o 00:01:52.302 LINK nvme_compliance 00:01:52.302 CXX test/cpp_headers/stdinc.o 00:01:52.302 LINK fdp 00:01:52.302 CXX test/cpp_headers/thread.o 00:01:52.302 CXX test/cpp_headers/trace.o 00:01:52.302 CXX test/cpp_headers/string.o 00:01:52.302 LINK nvmf 00:01:52.302 CXX test/cpp_headers/trace_parser.o 00:01:52.562 CXX test/cpp_headers/tree.o 00:01:52.562 CXX test/cpp_headers/ublk.o 00:01:52.562 CXX test/cpp_headers/util.o 00:01:52.562 CXX test/cpp_headers/uuid.o 00:01:52.562 CXX test/cpp_headers/version.o 00:01:52.562 CXX test/cpp_headers/vfio_user_pci.o 00:01:52.562 LINK spdk_nvme 00:01:52.562 LINK nvme_fuzz 00:01:52.562 CXX test/cpp_headers/vfio_user_spec.o 00:01:52.562 LINK pci_ut 00:01:52.562 CXX test/cpp_headers/vhost.o 00:01:52.562 CXX test/cpp_headers/vmd.o 00:01:52.562 CXX test/cpp_headers/xor.o 00:01:52.562 CXX test/cpp_headers/zipf.o 00:01:52.562 LINK bdevio 00:01:52.562 LINK spdk_bdev 00:01:52.562 LINK accel_perf 00:01:52.562 LINK nvme_manage 00:01:52.562 LINK blobcli 00:01:52.820 LINK spdk_nvme_perf 00:01:52.820 LINK spdk_nvme_identify 00:01:52.820 LINK mem_callbacks 00:01:52.820 LINK bdevperf 00:01:52.820 LINK spdk_top 00:01:52.820 LINK vhost_fuzz 00:01:52.820 LINK memory_ut 00:01:53.079 LINK cuse 00:01:53.647 LINK iscsi_fuzz 00:01:55.550 LINK esnap 00:01:55.809 00:01:55.809 real 0m42.776s 00:01:55.809 user 6m33.864s 00:01:55.809 sys 3m35.464s 00:01:55.809 16:52:43 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:55.809 16:52:43 make -- common/autotest_common.sh@10 -- $ set +x 00:01:55.809 ************************************ 00:01:55.809 END TEST make 00:01:55.809 ************************************ 00:01:55.809 16:52:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:55.809 16:52:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:55.809 16:52:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:55.810 16:52:43 -- pm/common@42 -- $ 
for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.810 16:52:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:55.810 16:52:43 -- pm/common@44 -- $ pid=2776441 00:01:55.810 16:52:43 -- pm/common@50 -- $ kill -TERM 2776441 00:01:55.810 16:52:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.810 16:52:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:55.810 16:52:43 -- pm/common@44 -- $ pid=2776442 00:01:55.810 16:52:43 -- pm/common@50 -- $ kill -TERM 2776442 00:01:55.810 16:52:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.810 16:52:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:55.810 16:52:43 -- pm/common@44 -- $ pid=2776444 00:01:55.810 16:52:43 -- pm/common@50 -- $ kill -TERM 2776444 00:01:55.810 16:52:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.810 16:52:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:55.810 16:52:43 -- pm/common@44 -- $ pid=2776473 00:01:55.810 16:52:43 -- pm/common@50 -- $ sudo -E kill -TERM 2776473 00:01:56.069 16:52:43 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:56.069 16:52:43 -- nvmf/common.sh@7 -- # uname -s 00:01:56.069 16:52:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:56.069 16:52:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:56.069 16:52:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:56.069 16:52:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:56.069 16:52:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:56.069 16:52:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:56.069 16:52:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:56.069 16:52:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:56.069 16:52:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:56.069 16:52:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:56.069 16:52:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:01:56.069 16:52:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:01:56.069 16:52:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:56.069 16:52:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:56.069 16:52:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:56.069 16:52:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:56.069 16:52:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:56.069 16:52:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:56.069 16:52:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:56.069 16:52:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:56.069 16:52:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.069 16:52:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.069 16:52:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.069 16:52:43 -- paths/export.sh@5 -- # export PATH 00:01:56.069 16:52:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.069 16:52:43 -- nvmf/common.sh@47 -- # : 0 00:01:56.069 16:52:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:56.069 16:52:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:56.069 16:52:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:56.069 16:52:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:56.069 16:52:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:56.069 16:52:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:56.069 16:52:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:56.069 16:52:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:56.069 16:52:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:56.069 16:52:43 -- spdk/autotest.sh@32 -- # uname -s 00:01:56.069 16:52:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:56.069 16:52:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:56.069 16:52:43 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:56.069 16:52:43 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:56.069 16:52:43 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:56.069 16:52:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:56.070 16:52:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:56.070 16:52:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:56.070 16:52:43 -- spdk/autotest.sh@48 -- # udevadm_pid=2834408 00:01:56.070 16:52:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:56.070 16:52:43 -- pm/common@17 -- # local monitor 00:01:56.070 16:52:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.070 16:52:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.070 16:52:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:56.070 16:52:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.070 16:52:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.070 16:52:43 -- pm/common@25 -- # sleep 1 00:01:56.070 16:52:43 -- pm/common@21 -- # date +%s 00:01:56.070 16:52:43 -- pm/common@21 -- # date +%s 00:01:56.070 16:52:43 -- pm/common@21 -- # date +%s 00:01:56.070 16:52:43 -- pm/common@21 -- # date +%s 00:01:56.070 16:52:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715784763 00:01:56.070 16:52:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715784763 00:01:56.070 16:52:43 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715784763 00:01:56.070 16:52:43 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715784763 00:01:56.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715784763_collect-vmstat.pm.log 00:01:56.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715784763_collect-cpu-load.pm.log 00:01:56.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715784763_collect-cpu-temp.pm.log 00:01:56.070 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715784763_collect-bmc-pm.bmc.pm.log 00:01:57.007 16:52:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:57.007 16:52:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:57.007 16:52:44 -- common/autotest_common.sh@720 -- # xtrace_disable 00:01:57.007 16:52:44 -- common/autotest_common.sh@10 -- # set +x 00:01:57.007 16:52:44 -- spdk/autotest.sh@59 -- # create_test_list 00:01:57.007 16:52:44 -- common/autotest_common.sh@744 -- # xtrace_disable 00:01:57.007 16:52:44 -- common/autotest_common.sh@10 -- # set +x 00:01:57.007 16:52:44 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:57.007 16:52:44 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.007 16:52:44 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.007 16:52:44 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:57.007 16:52:44 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.007 16:52:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:57.007 16:52:44 -- common/autotest_common.sh@1451 -- # uname 00:01:57.007 16:52:44 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:01:57.007 16:52:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:57.007 16:52:44 -- common/autotest_common.sh@1471 -- # uname 00:01:57.007 16:52:44 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:01:57.007 16:52:44 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:57.007 16:52:44 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:57.007 16:52:44 -- spdk/autotest.sh@72 -- # hash lcov 00:01:57.007 16:52:44 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:57.007 16:52:44 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:57.007 --rc lcov_branch_coverage=1 00:01:57.007 --rc lcov_function_coverage=1 00:01:57.007 --rc genhtml_branch_coverage=1 00:01:57.007 --rc genhtml_function_coverage=1 00:01:57.007 --rc genhtml_legend=1 00:01:57.007 --rc geninfo_all_blocks=1 00:01:57.007 ' 
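The power/CPU monitors started above all follow the same lifecycle: each collect-* sampler is launched in the background with an output directory and a pid file under power/, and the pm/common teardown later sends SIGTERM to every pid it recorded. A minimal, generic sketch of that start/record/stop pattern follows; the sampler commands and helper names are illustrative stand-ins, not SPDK's own pm/common scripts.

    #!/usr/bin/env bash
    # Illustrative sketch only: start background samplers, remember their PIDs in
    # per-sampler pid files, and terminate them with SIGTERM when the run ends --
    # the same lifecycle the collect-cpu-load/vmstat/cpu-temp/bmc-pm entries show.
    set -euo pipefail

    OUT_DIR=${1:-./power}            # assumed location; the real harness derives its own output path
    mkdir -p "$OUT_DIR"

    start_sampler() {                # hypothetical helper, not pm/common
        local name=$1; shift
        # Take one sample per second, append to a per-sampler log, and record the PID.
        ( while true; do "$@"; sleep 1; done ) >> "$OUT_DIR/$name.log" 2>&1 &
        echo $! > "$OUT_DIR/$name.pid"
    }

    stop_samplers() {
        local pidfile
        for pidfile in "$OUT_DIR"/*.pid; do
            [[ -e $pidfile ]] || continue
            kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
        done
    }
    trap stop_samplers EXIT

    start_sampler cpu-load uptime    # stand-ins for the real collectors
    start_sampler vmstat vmstat 1 1
    sleep 5                          # placeholder for the actual test run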
00:01:57.007 16:52:44 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:57.007 --rc lcov_branch_coverage=1 00:01:57.007 --rc lcov_function_coverage=1 00:01:57.007 --rc genhtml_branch_coverage=1 00:01:57.007 --rc genhtml_function_coverage=1 00:01:57.007 --rc genhtml_legend=1 00:01:57.007 --rc geninfo_all_blocks=1 00:01:57.007 ' 00:01:57.007 16:52:44 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:57.007 --rc lcov_branch_coverage=1 00:01:57.007 --rc lcov_function_coverage=1 00:01:57.007 --rc genhtml_branch_coverage=1 00:01:57.007 --rc genhtml_function_coverage=1 00:01:57.007 --rc genhtml_legend=1 00:01:57.007 --rc geninfo_all_blocks=1 00:01:57.007 --no-external' 00:01:57.007 16:52:44 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:57.007 --rc lcov_branch_coverage=1 00:01:57.007 --rc lcov_function_coverage=1 00:01:57.007 --rc genhtml_branch_coverage=1 00:01:57.007 --rc genhtml_function_coverage=1 00:01:57.007 --rc genhtml_legend=1 00:01:57.007 --rc geninfo_all_blocks=1 00:01:57.007 --no-external' 00:01:57.007 16:52:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:57.265 lcov: LCOV version 1.14 00:01:57.265 16:52:44 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:07.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:07.240 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:07.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:07.240 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:07.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:07.240 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:07.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:07.240 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:19.479 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:19.479 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:19.479 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:19.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:19.480 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:19.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:19.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:19.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:19.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:19.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:19.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:19.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:19.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:19.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:19.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:19.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:19.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:19.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:19.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:19.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:19.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:19.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:19.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:19.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:19.739 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:19.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:19.739 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:19.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:19.740 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:19.740 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:21.116 16:53:08 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:21.116 16:53:08 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:21.117 16:53:08 -- common/autotest_common.sh@10 -- # set +x 00:02:21.117 16:53:08 -- spdk/autotest.sh@91 -- # rm -f 00:02:21.117 16:53:08 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:23.650 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:23.650 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:23.650 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:23.650 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:23.650 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:23.650 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:23.650 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:23.650 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:23.909 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:23.909 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:23.909 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:23.909 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:23.909 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:23.909 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:23.909 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:23.909 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:23.909 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:23.909 16:53:11 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:23.909 16:53:11 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:23.909 16:53:11 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:23.909 16:53:11 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:23.909 16:53:11 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:23.909 16:53:11 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:23.909 16:53:11 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:23.910 16:53:11 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:23.910 16:53:11 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:23.910 16:53:11 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:23.910 16:53:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:23.910 16:53:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:23.910 16:53:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:23.910 16:53:11 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:23.910 16:53:11 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:24.168 No valid GPT data, bailing 00:02:24.168 16:53:11 -- scripts/common.sh@391 -- # blkid 
-s PTTYPE -o value /dev/nvme0n1 00:02:24.168 16:53:11 -- scripts/common.sh@391 -- # pt= 00:02:24.168 16:53:11 -- scripts/common.sh@392 -- # return 1 00:02:24.168 16:53:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:24.168 1+0 records in 00:02:24.168 1+0 records out 00:02:24.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621962 s, 169 MB/s 00:02:24.169 16:53:11 -- spdk/autotest.sh@118 -- # sync 00:02:24.169 16:53:11 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:24.169 16:53:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:24.169 16:53:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:29.447 16:53:16 -- spdk/autotest.sh@124 -- # uname -s 00:02:29.447 16:53:16 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:29.447 16:53:16 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:29.447 16:53:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:29.447 16:53:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:29.447 16:53:16 -- common/autotest_common.sh@10 -- # set +x 00:02:29.447 ************************************ 00:02:29.447 START TEST setup.sh 00:02:29.447 ************************************ 00:02:29.448 16:53:16 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:29.448 * Looking for test storage... 00:02:29.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:29.448 16:53:16 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:29.448 16:53:16 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:29.448 16:53:16 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:29.448 16:53:16 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:29.448 16:53:16 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:29.448 16:53:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:29.448 ************************************ 00:02:29.448 START TEST acl 00:02:29.448 ************************************ 00:02:29.448 16:53:16 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:29.448 * Looking for test storage... 
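Before the setup tests start, autotest scrubs each whole NVMe namespace that is not zoned and carries no recognizable partition table, which is why the log shows the spdk-gpt.py probe bailing, an empty blkid PTTYPE, and a 1 MiB dd wipe. A rough sketch of that pre-clean flow is below; it is not the actual autotest.sh logic, and it reduces the probing to blkid alone.

    #!/usr/bin/env bash
    # Illustrative and destructive: zeroes the first MiB of matching namespaces.
    # Simplified version of the pre-test cleanup idea seen above; it is not the
    # actual autotest.sh code and uses only blkid for the partition-table probe.
    set -euo pipefail
    shopt -s nullglob extglob

    for dev in /dev/nvme*n!(*p*); do             # whole namespaces, skip partitions
        name=$(basename "$dev")
        # Leave zoned namespaces untouched, mirroring the is_block_zoned check.
        if [[ -e /sys/block/$name/queue/zoned &&
              $(cat /sys/block/$name/queue/zoned) != none ]]; then
            continue
        fi
        # No partition-table type reported -> assume the namespace is free to scrub.
        if [[ -z $(blkid -s PTTYPE -o value "$dev" 2>/dev/null || true) ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1 conv=fsync
        fi
    done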
00:02:29.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:29.448 16:53:16 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:29.448 16:53:16 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:29.448 16:53:16 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:29.448 16:53:16 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:29.448 16:53:16 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:29.448 16:53:16 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:29.448 16:53:16 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:29.448 16:53:16 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:29.448 16:53:16 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:29.448 16:53:16 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:29.448 16:53:16 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:29.448 16:53:16 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:29.448 16:53:16 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:29.448 16:53:16 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:29.448 16:53:16 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:29.448 16:53:16 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:31.981 16:53:19 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:31.981 16:53:19 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:31.981 16:53:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:31.981 16:53:19 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:31.981 16:53:19 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:31.981 16:53:19 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:34.512 Hugepages 00:02:34.512 node hugesize free / total 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.512 00:02:34.512 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.512 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:34.513 16:53:21 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:34.513 16:53:21 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:34.513 16:53:21 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:34.513 16:53:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:34.513 ************************************ 00:02:34.513 START TEST denied 00:02:34.513 ************************************ 00:02:34.513 16:53:21 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:02:34.513 16:53:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:34.513 16:53:21 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:34.513 16:53:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:34.513 16:53:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:34.513 16:53:21 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:37.801 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:37.801 16:53:24 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:37.801 16:53:24 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:37.801 16:53:24 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:37.801 16:53:24 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:37.801 16:53:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:37.801 16:53:24 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:37.801 16:53:24 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:37.801 16:53:24 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:37.801 16:53:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:37.801 16:53:24 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:41.086 00:02:41.086 real 0m6.275s 00:02:41.086 user 0m1.930s 00:02:41.086 sys 0m3.546s 00:02:41.086 16:53:28 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:41.086 16:53:28 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:41.086 ************************************ 00:02:41.086 END TEST denied 00:02:41.086 ************************************ 00:02:41.086 16:53:28 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:41.086 16:53:28 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:41.086 16:53:28 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:41.086 16:53:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:41.086 ************************************ 00:02:41.086 START TEST allowed 00:02:41.086 ************************************ 00:02:41.086 16:53:28 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:02:41.086 16:53:28 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:41.086 16:53:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:41.086 16:53:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:41.086 16:53:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:41.086 16:53:28 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:44.375 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:44.375 16:53:31 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:44.375 16:53:31 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:44.375 16:53:31 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:44.375 16:53:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:44.375 16:53:31 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.669 00:02:47.669 real 0m6.334s 00:02:47.669 user 0m1.874s 00:02:47.669 sys 0m3.533s 00:02:47.669 16:53:34 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:47.669 16:53:34 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:47.669 ************************************ 00:02:47.669 END TEST allowed 00:02:47.669 ************************************ 00:02:47.669 00:02:47.669 real 0m18.310s 00:02:47.669 user 0m5.853s 00:02:47.669 sys 0m10.804s 00:02:47.669 16:53:34 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:47.669 16:53:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:47.669 ************************************ 00:02:47.669 END TEST acl 00:02:47.669 ************************************ 00:02:47.669 16:53:34 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:47.669 16:53:34 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:47.669 16:53:34 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:02:47.669 16:53:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:47.669 ************************************ 00:02:47.669 START TEST hugepages 00:02:47.669 ************************************ 00:02:47.669 16:53:34 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:47.669 * Looking for test storage... 00:02:47.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.669 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 171471508 kB' 'MemAvailable: 175474500 kB' 'Buffers: 3888 kB' 'Cached: 12111164 kB' 'SwapCached: 0 kB' 'Active: 8135084 kB' 'Inactive: 4449832 kB' 'Active(anon): 7569580 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 473156 kB' 'Mapped: 178728 kB' 'Shmem: 7099716 kB' 'KReclaimable: 264216 kB' 'Slab: 836696 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 572480 kB' 'KernelStack: 20352 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982032 kB' 'Committed_AS: 9009372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314572 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.670 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
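The long run of "continue" entries above is setup/common.sh walking /proc/meminfo field by field until the requested key (here Hugepagesize) matches. A minimal stand-alone sketch of that lookup pattern, assuming only what the trace itself shows; the helper name get_field is illustrative, not SPDK's function:

#!/usr/bin/env bash
# Sketch only: every /proc/meminfo field that is not the requested one hits the
# "continue" branch, exactly as in the trace; a match prints the value and returns.
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_field Hugepagesize   # this host reports 2048 (kB), i.e. 2 MiB default huge pages
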
00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:47.671 16:53:34 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:47.671 16:53:34 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:47.671 16:53:34 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:47.671 16:53:34 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:47.671 16:53:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:47.671 ************************************ 00:02:47.672 START TEST default_setup 00:02:47.672 ************************************ 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.672 16:53:34 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:50.264 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:50.264 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:50.264 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:50.264 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:50.264 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:50.264 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:50.264 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 
00:02:50.264 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:50.264 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:50.264 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:50.264 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:50.264 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:50.264 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:50.264 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:50.264 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:50.264 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:50.836 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.836 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173638208 kB' 'MemAvailable: 177641200 kB' 'Buffers: 3888 kB' 'Cached: 12111264 kB' 'SwapCached: 0 kB' 'Active: 8152260 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586756 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490588 kB' 'Mapped: 178852 kB' 'Shmem: 7099816 kB' 'KReclaimable: 264216 kB' 'Slab: 835904 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571688 kB' 'KernelStack: 20320 kB' 'PageTables: 9028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9027696 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 314828 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
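Just before this scan, the trace (setup/common.sh@20 through @29) shows how the snapshot itself is captured: pick /proc/meminfo or a per-node meminfo file, read it into an array with mapfile, and strip the "Node <n> " prefix that per-node files carry. A hedged reconstruction of that capture step, which may differ from the real helper in detail:

#!/usr/bin/env bash
# Assumed reconstruction of the capture step seen in the trace, not the file itself.
shopt -s extglob                   # needed for the +([0-9]) pattern used below
node=$1                            # empty -> global /proc/meminfo, as in this run
mem_f=/proc/meminfo
if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")   # per-node lines look like "Node 0 MemTotal: ..."
printf '%s\n' "${mem[@]}"
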
00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.837 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.838 16:53:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
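With AnonHugePages read back as anon=0, the verify pass goes on to pull HugePages_Surp and HugePages_Rsvd out of the same snapshot in the same way. Purely as an illustration of what those read-outs give you (not SPDK's verify_nr_hugepages itself, and read_field is a hypothetical stand-in for the scan loop):

#!/usr/bin/env bash
# Illustration only: fetch the hugepage accounting fields the verify pass reads.
read_field() { awk -v f="$1" -F'[: ]+' '$1 == f { print $2 }' /proc/meminfo; }

anon=$(read_field AnonHugePages)
surp=$(read_field HugePages_Surp)
resv=$(read_field HugePages_Rsvd)
total=$(read_field HugePages_Total)
free=$(read_field HugePages_Free)
echo "anon=$anon surp=$surp resv=$resv total=$total free=$free"
# In this run the dumps show HugePages_Total: 1024 and HugePages_Free: 1024,
# matching the 1024 pages default_setup asked for.
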
00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173638964 kB' 'MemAvailable: 177641956 kB' 'Buffers: 3888 kB' 'Cached: 12111264 kB' 'SwapCached: 0 kB' 'Active: 8153316 kB' 'Inactive: 4449832 kB' 'Active(anon): 7587812 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491184 kB' 'Mapped: 178852 kB' 'Shmem: 7099816 kB' 'KReclaimable: 264216 kB' 'Slab: 835904 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571688 kB' 'KernelStack: 20416 kB' 'PageTables: 9448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9027716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314844 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.838 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.839 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173639456 kB' 'MemAvailable: 177642448 kB' 'Buffers: 3888 kB' 'Cached: 12111264 kB' 'SwapCached: 0 kB' 'Active: 8152764 kB' 'Inactive: 4449832 kB' 'Active(anon): 7587260 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490632 kB' 'Mapped: 178828 kB' 'Shmem: 7099816 kB' 'KReclaimable: 264216 kB' 'Slab: 835904 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571688 kB' 'KernelStack: 20480 kB' 'PageTables: 9368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9026244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314764 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.840 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.841 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:50.842 nr_hugepages=1024 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:50.842 resv_hugepages=0 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:50.842 surplus_hugepages=0 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:50.842 anon_hugepages=0 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173642616 kB' 'MemAvailable: 177645608 kB' 'Buffers: 3888 kB' 'Cached: 12111300 kB' 'SwapCached: 0 kB' 'Active: 8152216 
kB' 'Inactive: 4449832 kB' 'Active(anon): 7586712 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490032 kB' 'Mapped: 178752 kB' 'Shmem: 7099852 kB' 'KReclaimable: 264216 kB' 'Slab: 835888 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571672 kB' 'KernelStack: 20128 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9027756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314716 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.842 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
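The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time: each line is read with IFS=': ', every key that is not the requested one hits the "continue" branch, and the matching key's value is echoed and returned (0 for HugePages_Surp and HugePages_Rsvd, 1024 for HugePages_Total). A minimal sketch of an equivalent lookup, written here only as an illustration and not claiming to be the verbatim SPDK helper:

  get_meminfo_sketch() {
    local get=$1 node=$2 line var val
    local mem_f=/proc/meminfo
    # When a node index is given and a per-node file exists, read that instead;
    # per-node lines carry a "Node N " prefix that has to be stripped first.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
      line=${line#Node "$node" }
      IFS=': ' read -r var val _ <<< "$line"
      if [[ $var == "$get" ]]; then
        echo "$val"          # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
        return 0
      fi
    done < "$mem_f"
    return 1
  }

Against the meminfo snapshot printed in this trace, get_meminfo_sketch HugePages_Total would print 1024; passing a node index (e.g. get_meminfo_sketch HugePages_Surp 0) switches the source to that node's meminfo file, as the per-node pass later in the trace does.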
00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
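What these lookups feed into is visible at the hugepages.sh lines interleaved above and below: surp=0, resv=0, nr_hugepages=1024, the assertion (( 1024 == nr_hugepages + surp + resv )), and then a per-NUMA-node pass over /sys/devices/system/node/node*. A rough illustration of that accounting, reusing the hypothetical helper sketched above (the real hugepages.sh verification may differ in detail):

  verify_default_setup_sketch() {
    local nr_hugepages=1024
    local surp resv node
    surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
    # System-wide totals must add up to the 1024 default-sized pages requested.
    (( $(get_meminfo_sketch HugePages_Total) == nr_hugepages + surp + resv )) || return 1
    # The same counters are then checked per NUMA node.
    for node in /sys/devices/system/node/node[0-9]*; do
      node=${node##*node}
      echo "node$node:" \
           "HugePages_Total=$(get_meminfo_sketch HugePages_Total "$node")" \
           "HugePages_Surp=$(get_meminfo_sketch HugePages_Surp "$node")"
    done
  }

In this run node0 holds all 1024 pages of the default 2048 kB size and node1 holds none, which is what the nodes_sys[0]=1024 / nodes_sys[1]=0 assignments in the trace record before the per-node HugePages_Surp lookup against /sys/devices/system/node/node0/meminfo begins.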
00:02:50.843 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:50.844 
16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 83557120 kB' 'MemUsed: 14105564 kB' 'SwapCached: 0 kB' 'Active: 6531256 kB' 'Inactive: 4007180 kB' 'Active(anon): 6063640 kB' 'Inactive(anon): 0 kB' 'Active(file): 467616 kB' 'Inactive(file): 4007180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10246452 kB' 'Mapped: 150596 kB' 'AnonPages: 294628 kB' 'Shmem: 5771656 kB' 'KernelStack: 11864 kB' 'PageTables: 6284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131308 kB' 'Slab: 420080 kB' 'SReclaimable: 131308 kB' 'SUnreclaim: 288772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:50.844 16:53:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.844 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.845 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.846 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.846 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.846 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:50.846 node0=1024 expecting 1024 00:02:50.846 16:53:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:50.846 00:02:50.846 real 0m3.557s 00:02:50.846 user 0m1.084s 00:02:50.846 sys 0m1.714s 00:02:50.846 16:53:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:50.846 16:53:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:50.846 ************************************ 00:02:50.846 END TEST default_setup 00:02:50.846 ************************************ 00:02:51.105 16:53:38 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:51.105 16:53:38 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:51.105 16:53:38 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:51.105 16:53:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:51.105 ************************************ 00:02:51.105 START TEST per_node_1G_alloc 00:02:51.105 ************************************ 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
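Note on the hugepage sizing traced here: the default_setup block just above finishes by echoing 'node0=1024 expecting 1024' and passing its [[ 1024 == 1024 ]] check, i.e. node 0 ended up with the 1024 pages that test expects. The per_node_1G_alloc trace that starts here calls get_test_nr_hugepages with a 1048576 kB (1 GiB) request for nodes 0 and 1, and the entries that follow settle on nr_hugepages=512 applied to each listed node, which is consistent with simply dividing the requested size by the 2048 kB default hugepage size reported in the 'Hugepagesize: 2048 kB' lines of the meminfo dumps later in this log. A minimal standalone sketch of that arithmetic (variable names are illustrative only, not the SPDK helper itself):

    # illustrative numbers taken from this trace: 1 GiB request, 2048 kB default hugepage size
    size_kb=1048576                                     # requested size in kB (1 GiB)
    default_hugepage_kb=2048                            # 'Hugepagesize: 2048 kB' in the meminfo dumps
    node_ids=(0 1)                                      # nodes named in HUGENODE=0,1
    nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 1048576 / 2048 = 512, matching nr_hugepages=512
    for node in "${node_ids[@]}"; do
        echo "node${node}: ${nr_hugepages} hugepages"   # each listed node gets the full 512 (NRHUGE=512)
    done

Under the same 2048 kB page size, the default_setup expectation of 1024 pages corresponds to the 'HugePages_Total: 1024' and 'Hugetlb: 2097152 kB' lines in the meminfo snapshots below (1024 x 2048 kB = 2097152 kB).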
00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.105 16:53:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:53.644 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:53.645 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:53.645 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173632608 kB' 'MemAvailable: 177635600 kB' 'Buffers: 3888 kB' 'Cached: 12111404 kB' 'SwapCached: 0 kB' 'Active: 8151944 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586440 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489808 kB' 'Mapped: 178816 kB' 'Shmem: 7099956 kB' 'KReclaimable: 264216 kB' 'Slab: 835868 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571652 kB' 'KernelStack: 20272 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9025776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314796 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 
kB' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 
16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.645 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
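The long runs of IFS=': ' / read -r var val _ / continue entries in this log are set -x output from the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo (or a node's /sys/devices/system/node/node<N>/meminfo file when a node is requested), strips any 'Node <N> ' prefix, then walks the fields one by one until it reaches the requested key (AnonHugePages, HugePages_Surp, HugePages_Rsvd, ...) and echoes that value, 0 for each lookup in this trace. A rough standalone sketch of the same idea, assuming that lookup is all the helper does (get_meminfo_field is an invented name, not the SPDK function):

    # sketch of a meminfo field lookup in the style traced above (assumed helper name)
    shopt -s extglob
    get_meminfo_field() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # when a NUMA node is given, prefer its per-node meminfo file
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node <N> " prefix of per-node files
        local var val
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. HugePages_Surp -> 0
        done < <(printf '%s\n' "${mem[@]}")
        echo 0   # fallback if the field is absent (assumption, not from the trace)
    }

For the snapshot printed above, such a lookup would return 1024 for HugePages_Total and 0 for HugePages_Surp, which is where the anon=0 just above and the surp=0 later in this trace come from.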
00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173633560 kB' 'MemAvailable: 177636552 kB' 'Buffers: 3888 kB' 'Cached: 12111408 kB' 'SwapCached: 0 kB' 'Active: 8152112 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586608 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490000 kB' 'Mapped: 178776 kB' 'Shmem: 7099960 kB' 'KReclaimable: 264216 kB' 'Slab: 835936 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571720 kB' 'KernelStack: 20256 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9026920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314748 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.646 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.647 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:53.648 16:53:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173634064 kB' 'MemAvailable: 177637056 kB' 'Buffers: 3888 kB' 'Cached: 12111424 kB' 'SwapCached: 0 kB' 'Active: 8152412 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586908 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490336 kB' 'Mapped: 178776 kB' 'Shmem: 7099976 kB' 'KReclaimable: 264216 kB' 'Slab: 835936 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571720 kB' 'KernelStack: 20208 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9027180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314732 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.648 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.649 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:53.650 nr_hugepages=1024 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:53.650 resv_hugepages=0 00:02:53.650 16:53:41 
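The HugePages_Rsvd walk above ends the same way (echo 0, resv=0), and the script then echoes the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary and re-checks the totals. A minimal sketch of that bookkeeping, using awk instead of the script's read loop (the 1024 target matches this run's configuration; variable names are illustrative):

    expected=1024    # pages requested for this run
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
    # Same shape as the "(( 1024 == nr_hugepages + surp + resv ))" test in the trace.
    (( total == expected + surp + resv )) && echo OK || echo MISMATCH

With Hugepagesize at 2048 kB, the 1024 pages also account for the Hugetlb figure in the meminfo dump above: 1024 * 2048 kB = 2097152 kB.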
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:53.650 surplus_hugepages=0 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:53.650 anon_hugepages=0 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173636544 kB' 'MemAvailable: 177639536 kB' 'Buffers: 3888 kB' 'Cached: 12111448 kB' 'SwapCached: 0 kB' 'Active: 8152396 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586892 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490080 kB' 'Mapped: 178784 kB' 'Shmem: 7100000 kB' 'KReclaimable: 264216 kB' 'Slab: 835936 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571720 kB' 'KernelStack: 20288 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9028456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314796 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.650 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.650 16:53:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.651 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:53.652 16:53:41 
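At this point HugePages_Total has been read back as 1024 and get_nodes starts enumerating the NUMA nodes, assigning 512 pages to each of the two sockets (nodes_sys[0]=512, nodes_sys[1]=512, no_nodes=2). A minimal sketch of that enumeration, with the 512-per-node split hard-coded to match this run:

    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # Index by the numeric suffix, as the trace's ${node##*node} does.
        nodes_sys[${node##*node}]=512
    done
    no_nodes=${#nodes_sys[@]}
    echo "no_nodes=$no_nodes, nodes: ${!nodes_sys[*]} (512 pages each)"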
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84605124 kB' 'MemUsed: 13057560 kB' 'SwapCached: 0 kB' 'Active: 6531960 kB' 'Inactive: 4007180 kB' 'Active(anon): 6064344 kB' 'Inactive(anon): 0 kB' 'Active(file): 467616 kB' 'Inactive(file): 4007180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10246600 kB' 'Mapped: 150620 kB' 'AnonPages: 295804 kB' 'Shmem: 5771804 kB' 'KernelStack: 11960 kB' 'PageTables: 6292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131308 kB' 'Slab: 420364 kB' 'SReclaimable: 131308 kB' 'SUnreclaim: 289056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.652 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.653 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
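A note on the heavily backslashed comparisons that fill this part of the trace: they are not corruption. Under set -x, bash prints a quoted right-hand operand of == inside [[ ]] with every character escaped, to show it is matched literally rather than as a glob pattern. A minimal reproduction (illustrative only; the variable names are mine, not from the SPDK scripts):

  # reproduce the \H\u\g\e\P\a\g\e\s\_\S\u\r\p rendering seen in this trace
  get=HugePages_Surp
  set -x
  [[ MemUsed == "$get" ]]   # printed (modulo the PS4 prefix) as:
                            #   [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
  set +x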
00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 89031140 kB' 'MemUsed: 4687340 kB' 'SwapCached: 0 kB' 'Active: 1620444 kB' 'Inactive: 442652 kB' 'Active(anon): 1522556 kB' 'Inactive(anon): 0 kB' 'Active(file): 97888 kB' 'Inactive(file): 442652 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1868760 kB' 'Mapped: 28156 kB' 'AnonPages: 194376 kB' 'Shmem: 1328220 kB' 'KernelStack: 8456 kB' 'PageTables: 2756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132908 kB' 'Slab: 415572 kB' 'SReclaimable: 132908 kB' 'SUnreclaim: 282664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
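The printf just above dumps node1's meminfo in one go; the long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" entries that follows is that dump being scanned key by key until HugePages_Surp is reached. A condensed sketch of what the traced get_meminfo helper appears to do, reconstructed from the trace rather than taken from the real setup/common.sh:

  #!/usr/bin/env bash
  shopt -s extglob            # for the +([0-9]) pattern used below

  # get_meminfo_sketch FIELD [NODE]: print FIELD from /proc/meminfo, or from
  # /sys/devices/system/node/node<NODE>/meminfo when NODE is given.
  get_meminfo_sketch() {
      local get=$1 node=$2 line var val _
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # per-node lines start with "Node <n> "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"                 # kB value or bare page count
              return 0
          fi
      done
      return 1
  }

On the values dumped above, get_meminfo_sketch HugePages_Surp 1 would print 0, which matches the echo 0 / return 0 pair the trace reaches further down.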
00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
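For orientation while the scan grinds on: the figure this test is ultimately checking (the node0=512 expecting 512 and node1=512 expecting 512 lines further down) is simply 1 GiB of default-sized huge pages per node. A quick back-of-envelope check, assuming the 2048 kB Hugepagesize reported later in this log:

  pages_per_node=512      # HugePages_Total reported for node1 in the dump above
  hugepage_kb=2048        # Hugepagesize from the system-wide meminfo dump below
  echo $(( pages_per_node * hugepage_kb ))       # 1048576 kB = 1 GiB per node
  echo $(( 2 * pages_per_node * hugepage_kb ))   # 2097152 kB = 2 GiB across node0 and node1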
00:02:53.654 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:53.655 node0=512 expecting 512 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:53.655 node1=512 expecting 512 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:53.655 00:02:53.655 real 0m2.581s 00:02:53.655 user 0m1.053s 00:02:53.655 sys 0m1.536s 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:53.655 16:53:41 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:53.655 ************************************ 00:02:53.655 END TEST per_node_1G_alloc 00:02:53.655 ************************************ 00:02:53.655 16:53:41 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:53.655 16:53:41 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:53.655 16:53:41 
setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:53.655 16:53:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:53.655 ************************************ 00:02:53.655 START TEST even_2G_alloc 00:02:53.655 ************************************ 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.655 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:53.656 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:53.656 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:53.656 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:53.656 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:53.656 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:53.656 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:53.656 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.656 16:53:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:56.190 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:56.190 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:56.190 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:02:56.190 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:56.190 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:56.190 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:56.190 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:56.190 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:56.190 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:02:56.190 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:56.190 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:56.190 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:56.190 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:56.190 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:56.190 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:56.190 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:56.190 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173615236 kB' 'MemAvailable: 177618228 kB' 'Buffers: 3888 kB' 'Cached: 12111556 kB' 'SwapCached: 0 kB' 'Active: 8152348 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586844 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488972 kB' 'Mapped: 177564 kB' 'Shmem: 7100108 kB' 'KReclaimable: 264216 kB' 'Slab: 836260 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 572044 kB' 'KernelStack: 20672 kB' 'PageTables: 9972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9017720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314956 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.455 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
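The anon=0 just recorded closes out the AnonHugePages scan above. That scan only runs because transparent huge pages are not fully disabled on this host: un-escaping the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] entry near the start of this block, the check boils down to something like the sketch below. The sysfs path is assumed; the log shows only the file's typical contents, "always [madvise] never".

  # Hedged sketch of the THP/anon accounting step traced above; the sysfs path
  # is assumed, the values mirror what is visible in this log.
  anon=0
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
  if [[ $thp != *"[never]"* ]]; then
      # THP is at least in madvise mode, so anonymous huge pages can exist;
      # record them so the later verification can take them into account.
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "anon=$anon"       # 0 on this run, per AnonHugePages: 0 kB in the dump above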
00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.456 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.457 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.457 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.457 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.457 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.457 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173615900 kB' 'MemAvailable: 177618892 kB' 'Buffers: 3888 kB' 'Cached: 12111556 kB' 'SwapCached: 0 kB' 'Active: 8152264 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586760 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489356 kB' 'Mapped: 177544 kB' 'Shmem: 7100108 kB' 'KReclaimable: 264216 kB' 'Slab: 836232 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 572016 kB' 'KernelStack: 20624 kB' 'PageTables: 9560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9017740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314828 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB'
[xtrace of the per-key scan elided: setup/common.sh@31-32 reads each field of the snapshot above in turn and skips it with 'continue' until the HugePages_Surp entry is reached]
00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
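The surp=0 assignment above is the value get_meminfo returned for the HugePages_Surp field of the snapshot. For readers following the trace, the lookup amounts to the minimal sketch below; this is an illustration only, not the verbatim setup/common.sh source, and the optional NUMA-node argument plus the sed-based "Node <n>" prefix strip are assumptions inferred from the node meminfo existence check and the "${mem[@]#Node +([0-9]) }" expansion visible in the xtrace.

#!/usr/bin/env bash
# Sketch: look up one field from /proc/meminfo (or a per-node meminfo file) by name.
get_meminfo() {
    local get=$1 node=${2:-}   # e.g. get=HugePages_Surp; node is empty in the trace above
    local mem_f=/proc/meminfo
    local var val _
    # With a node argument, prefer that node's meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <n> "; strip it so the key lands in $var.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other field, as the xtrace shows
        echo "$val"                        # bare number; a trailing "kB" ends up in $_
        return 0
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1                               # requested field not present
}

surp=$(get_meminfo HugePages_Surp)   # usage as at setup/hugepages.sh@99 above

With Hugepagesize: 2048 kB and HugePages_Total: 1024 in the snapshot, the pool being checked is 1024 * 2048 kB = 2097152 kB (the Hugetlb value), i.e. the 2G that the even_2G_alloc test name refers to, and both HugePages_Rsvd and HugePages_Surp read back as 0 there.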
00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.458
16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.458 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.459 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173615800 kB' 'MemAvailable: 177618792 kB' 'Buffers: 3888 kB' 'Cached: 12111572 kB' 'SwapCached: 0 kB' 'Active: 8151264 kB' 'Inactive: 4449832 kB' 'Active(anon): 7585760 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488836 kB' 'Mapped: 177464 kB' 'Shmem: 7100124 kB' 'KReclaimable: 264216 kB' 'Slab: 836120 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571904 kB' 'KernelStack: 20576 kB' 'PageTables: 9544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9019252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314844 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB'
[xtrace of the per-key scan elided: each field of the snapshot above is compared against HugePages_Rsvd and skipped with 'continue'; the trace resumes just before the match below]
00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.460 16:53:43
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.460 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:56.461 nr_hugepages=1024 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:56.461 resv_hugepages=0 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:56.461 surplus_hugepages=0 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:56.461 anon_hugepages=0 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.461 16:53:43 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173615840 kB' 'MemAvailable: 177618832 kB' 'Buffers: 3888 kB' 'Cached: 12111600 kB' 'SwapCached: 0 kB' 'Active: 8151196 kB' 'Inactive: 4449832 kB' 'Active(anon): 7585692 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488776 kB' 'Mapped: 177464 kB' 'Shmem: 7100152 kB' 'KReclaimable: 264216 kB' 'Slab: 836120 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571904 kB' 'KernelStack: 20528 kB' 'PageTables: 9260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9018900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314812 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB'
[xtrace of the per-key scan elided: each field of the snapshot above is compared against HugePages_Total and skipped with 'continue'; the trace resumes below at the Unaccepted field]
00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31
-- # IFS=': ' 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.462 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84601056 kB' 'MemUsed: 13061628 kB' 'SwapCached: 0 kB' 'Active: 6529484 kB' 'Inactive: 4007180 kB' 'Active(anon): 6061868 kB' 'Inactive(anon): 0 kB' 'Active(file): 467616 kB' 'Inactive(file): 4007180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10246720 kB' 'Mapped: 149308 kB' 'AnonPages: 293120 kB' 'Shmem: 5771924 kB' 'KernelStack: 12120 kB' 'PageTables: 6656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131308 kB' 'Slab: 420324 kB' 'SReclaimable: 131308 kB' 'SUnreclaim: 289016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.463 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 89014512 kB' 'MemUsed: 4703968 kB' 'SwapCached: 0 kB' 'Active: 1621584 kB' 'Inactive: 442652 kB' 'Active(anon): 1523696 kB' 'Inactive(anon): 0 kB' 'Active(file): 97888 kB' 'Inactive(file): 442652 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1868788 kB' 'Mapped: 28156 kB' 'AnonPages: 195504 kB' 'Shmem: 1328248 kB' 'KernelStack: 8232 kB' 'PageTables: 2204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 132908 kB' 'Slab: 415732 kB' 'SReclaimable: 132908 kB' 'SUnreclaim: 282824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.464 16:53:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.464 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:56.465 node0=512 expecting 512 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.465 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:56.466 node1=512 expecting 512 00:02:56.466 16:53:44 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:56.466 00:02:56.466 real 0m2.890s 00:02:56.466 user 0m1.179s 00:02:56.466 sys 0m1.755s 00:02:56.466 16:53:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:56.466 16:53:44 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:56.466 ************************************ 00:02:56.466 END TEST even_2G_alloc 00:02:56.466 ************************************ 00:02:56.725 16:53:44 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:56.725 16:53:44 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:56.725 16:53:44 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:56.725 16:53:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:56.725 ************************************ 00:02:56.725 START TEST odd_alloc 00:02:56.725 ************************************ 00:02:56.725 16:53:44 
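
The even_2G_alloc pass above ends with both NUMA nodes reporting the split the test asked for: the 1024 preallocated 2048 kB hugepages are divided evenly, node0 and node1 each show 512, and the final [[ 512 == 512 ]] comparison lets run_test report success before odd_alloc starts. Almost all of the surrounding xtrace comes from the get_meminfo helper in setup/common.sh walking /proc/meminfo or a per-node sysfs meminfo file key by key. A condensed sketch of that helper, reconstructed from the trace above and simplified for readability (not the verbatim SPDK source), looks like this:

    get_meminfo() {
        local get=$1 node=$2            # e.g. get=HugePages_Surp, node=0
        local mem_f=/proc/meminfo
        # When a node index is given, read the per-node counters from sysfs instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # sysfs lines carry a "Node N " prefix
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # every non-matching key logs one "continue" above
            echo "$val"
            return 0
        done
        return 1
    }

Against the node0 values printed above, get_meminfo HugePages_Surp 0 returns 0, so hugepages.sh adds nothing to nodes_test[0] before echoing 'node0=512 expecting 512'.
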
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.725 16:53:44 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:59.268 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:02:59.268 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:59.268 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:02:59.268 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:02:59.268 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:02:59.268 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:02:59.268 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:02:59.268 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:02:59.268 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 
00:02:59.268 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:02:59.268 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:02:59.268 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:02:59.268 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:02:59.268 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:02:59.268 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:02:59.268 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:02:59.268 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:02:59.268 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:59.268 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:59.268 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:59.268 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:59.268 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:59.268 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:59.268 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:59.268 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:59.268 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173621316 kB' 'MemAvailable: 177624308 kB' 'Buffers: 3888 kB' 'Cached: 12111712 kB' 'SwapCached: 0 kB' 'Active: 8152016 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586512 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488968 kB' 'Mapped: 177580 kB' 'Shmem: 7100264 kB' 'KReclaimable: 264216 kB' 'Slab: 836000 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571784 kB' 'KernelStack: 20448 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 9019880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314828 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.269 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.269 16:53:46 
[xtrace condensed: setup/common.sh@31-@32 read each remaining /proc/meminfo field in turn and hit "continue" for every field that is not AnonHugePages]
00:02:59.270 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:59.270 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:59.270 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:59.270 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:59.270 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace condensed: setup/common.sh@17-@31 set up the lookup (get=HugePages_Surp, node unset, mem_f=/proc/meminfo, mapfile -t mem, strip any "Node <n> " prefix)]
00:02:59.270 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173622052 kB' 'MemAvailable: 177625044 kB' 'Buffers: 3888 kB' 'Cached: 12111716 kB' 'SwapCached: 0 kB' 'Active: 8151452 kB' 'Inactive: 4449832 kB' 'Active(anon): 7585948 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488912 kB' 'Mapped: 177500 kB' 'Shmem: 7100268 kB' 'KReclaimable: 264216 kB' 'Slab: 836012 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571796 kB' 'KernelStack: 20320 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 9019896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314844 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB'
[xtrace condensed: setup/common.sh@31-@32 compare each field against HugePages_Surp and hit "continue" for every non-match]
00:02:59.272 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:59.272 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:59.272 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:59.272 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:59.272 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace condensed: setup/common.sh@17-@31 set up the lookup again (get=HugePages_Rsvd)]
00:02:59.272 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173621656 kB' 'MemAvailable: 177624648 kB' 'Buffers: 3888 kB' 'Cached: 12111732 kB' 'SwapCached: 0 kB' 'Active: 8151912 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586408 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489372 kB' 'Mapped: 177500 kB' 'Shmem: 7100284 kB' 'KReclaimable: 264216 kB' 'Slab: 836012 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571796 kB' 'KernelStack: 20480 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 9019920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314844 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB'
[xtrace condensed: setup/common.sh@31-@32 compare each field against HugePages_Rsvd and hit "continue" for every non-match]
00:02:59.274 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:59.274 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:59.274 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:59.274 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:59.274 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:02:59.274 nr_hugepages=1025
00:02:59.274 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:59.274 resv_hugepages=0
00:02:59.274 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:59.274 surplus_hugepages=0
00:02:59.274 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:59.274 anon_hugepages=0
00:02:59.274 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:59.274 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:02:59.274 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace condensed: setup/common.sh@17-@31 set up the lookup again (get=HugePages_Total)]
00:02:59.274 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173622024 kB' 'MemAvailable: 177625016 kB' 'Buffers: 3888 kB' 'Cached: 12111732 kB' 'SwapCached: 0 kB' 'Active: 8151704 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586200 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489160 kB' 'Mapped: 178004 kB' 'Shmem: 7100284 kB' 'KReclaimable: 264216 kB' 'Slab: 836012 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571796 kB' 'KernelStack: 20320 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 9018812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314716 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB'
[xtrace condensed: setup/common.sh@31-@32 compare each field against HugePages_Total; the per-field "continue" trace carries on below]
-- # continue 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.275 16:53:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.275 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84595996 kB' 'MemUsed: 13066688 kB' 'SwapCached: 0 kB' 'Active: 6533344 kB' 'Inactive: 4007180 kB' 'Active(anon): 6065728 kB' 'Inactive(anon): 0 kB' 'Active(file): 467616 kB' 'Inactive(file): 4007180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10246836 kB' 'Mapped: 149332 kB' 'AnonPages: 297324 kB' 'Shmem: 5772040 kB' 'KernelStack: 11864 kB' 'PageTables: 5896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131308 kB' 'Slab: 420088 kB' 'SReclaimable: 131308 kB' 'SUnreclaim: 288780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
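The long run of "[[ <field> == HugePages_Total ]] ... continue" pairs above is setup/common.sh's get_meminfo walking every field of the meminfo dump until it reaches the one it was asked for (here HugePages_Total, answered with "echo 1025"); the node0 and node1 passes that follow do the same against the per-node meminfo files. A minimal standalone sketch of that parsing pattern is below; the helper name get_meminfo_value is hypothetical and this is a simplification, not the verbatim SPDK helper.

#!/usr/bin/env bash
# Sketch of the meminfo parsing pattern traced above (setup/common.sh get_meminfo).
# get_meminfo_value is a made-up name; the paths mirror the log.
get_meminfo_value() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries (the HugePages_Surp 0 / 1 calls in the trace) read that node's meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node N "; strip it so the key lands in $var.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated "continue" lines in the trace
        echo "$val"                        # value only; a trailing "kB" unit falls into $_
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo_value HugePages_Total      # 1025 on the host above
get_meminfo_value HugePages_Surp 0     # per-node query; 0 in this run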
00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.276 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 89019092 kB' 'MemUsed: 4699388 kB' 'SwapCached: 0 kB' 'Active: 1622684 kB' 'Inactive: 442652 kB' 'Active(anon): 1524796 kB' 'Inactive(anon): 0 kB' 'Active(file): 97888 kB' 'Inactive(file): 442652 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1868824 kB' 'Mapped: 28568 kB' 'AnonPages: 196708 kB' 'Shmem: 1328284 kB' 'KernelStack: 8376 kB' 'PageTables: 2672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132908 kB' 'Slab: 415924 kB' 'SReclaimable: 132908 kB' 'SUnreclaim: 283016 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.277 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.278 16:53:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.278 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:59.279 node0=512 expecting 513 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:59.279 node1=513 expecting 512 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:59.279 00:02:59.279 real 0m2.656s 00:02:59.279 user 0m1.073s 00:02:59.279 sys 0m1.597s 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:59.279 16:53:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:59.279 ************************************ 00:02:59.279 END TEST odd_alloc 00:02:59.279 ************************************ 00:02:59.279 16:53:46 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:59.279 16:53:46 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:59.279 16:53:46 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:59.279 16:53:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:59.279 ************************************ 00:02:59.279 START TEST custom_alloc 00:02:59.279 ************************************ 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:59.279 
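The odd_alloc run closes above with its 512/513 split across the two nodes confirmed, and the custom_alloc trace that follows builds its own per-node plan: a 1048576 kB (1 GiB) request becomes 512 two-MiB pages for node 0, a 2097152 kB (2 GiB) request becomes 1024 pages for node 1, and the pair is handed to scripts/setup.sh as HUGENODE. The following is a small standalone sketch of that arithmetic and string assembly, following the variable names in the trace but not reproducing setup/hugepages.sh verbatim.

#!/usr/bin/env bash
# Sketch only: mirrors the custom_alloc sizing visible in the trace below.
default_hugepages=2048                             # kB per huge page (Hugepagesize above)

declare -a nodes_hp
nodes_hp[0]=$(( 1048576 / default_hugepages ))     # 1 GiB request  -> 512 pages on node 0
nodes_hp[1]=$(( 2097152 / default_hugepages ))     # 2 GiB request  -> 1024 pages on node 1

# Assemble the HUGENODE argument and the total page count echoed at hugepages.sh@187/@188.
HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
( IFS=,; echo "HUGENODE=${HUGENODE[*]}" )          # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$_nr_hugepages"                 # 1536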
16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # 
HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:59.279 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:59.280 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:59.280 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:59.280 16:53:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:59.280 16:53:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.280 16:53:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:01.821 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:01.821 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:01.821 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:01.821 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:01.821 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:01.821 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:01.821 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:01.821 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:01.821 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:01.821 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:01.821 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:01.821 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:01.821 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:01.821 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:01.821 0000:80:04.2 (8086 2021): 
Already using the vfio-pci driver 00:03:01.821 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:01.821 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.821 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 172570940 kB' 'MemAvailable: 176573932 kB' 'Buffers: 3888 kB' 'Cached: 12111856 kB' 'SwapCached: 0 kB' 'Active: 8152044 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586540 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489368 kB' 'Mapped: 177672 kB' 'Shmem: 7100408 kB' 'KReclaimable: 264216 kB' 'Slab: 835632 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571416 kB' 'KernelStack: 20352 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 9019292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314700 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.822 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile 
-t mem 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 172572580 kB' 'MemAvailable: 176575572 kB' 'Buffers: 3888 kB' 'Cached: 12111860 kB' 'SwapCached: 0 kB' 'Active: 8152252 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586748 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489400 kB' 'Mapped: 177568 kB' 'Shmem: 7100412 kB' 'KReclaimable: 264216 kB' 'Slab: 835344 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571128 kB' 'KernelStack: 20384 kB' 'PageTables: 9056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 9020428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314732 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.823 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.824 16:53:49 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
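[editor's aside] The trace above and below is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time: it sets IFS=': ', reads each line into var/val, `continue`s past every key that does not match the requested one (here AnonHugePages, then HugePages_Surp, then HugePages_Rsvd), and finally echoes the matching value and returns 0. Purely as an illustrative sketch of that scan pattern -- this is not the SPDK source, and get_meminfo_sketch is a hypothetical name -- the same logic can be written as:

    #!/usr/bin/env bash
    # Minimal sketch of the meminfo field scan visible in this trace.
    # Assumptions: function name and exact structure are illustrative; the
    # real helper (setup/common.sh get_meminfo) uses mapfile and strips the
    # "Node <N> " prefix from per-node files before comparing keys.
    get_meminfo_sketch() {
            local get=$1 node=${2:-}      # e.g. HugePages_Surp, optional NUMA node
            local mem_f=/proc/meminfo
            # Per-node statistics live in /sys/devices/system/node/node<N>/meminfo;
            # with an empty $node this test fails, exactly as in the trace above.
            [[ -e /sys/devices/system/node/node$node/meminfo ]] && \
                    mem_f=/sys/devices/system/node/node$node/meminfo
            local var val _
            while IFS=': ' read -r var val _; do
                    [[ $var == "$get" ]] || continue   # skip non-matching keys
                    echo "$val"                        # numeric value; "kB" lands in $_
                    return 0
            done < "$mem_f"
            return 1
    }

    # Example: surplus huge pages system-wide (0 in this run).
    get_meminfo_sketch HugePages_Surp

In this run the scan returns 0 for AnonHugePages, HugePages_Surp and HugePages_Rsvd, which is what lets verify_nr_hugepages accept the 1536 pages (512 on node 0, 1024 on node 1) requested via HUGENODE. [end aside]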
00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.825 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 172570888 kB' 'MemAvailable: 176573880 kB' 'Buffers: 3888 kB' 'Cached: 12111876 kB' 'SwapCached: 0 kB' 'Active: 8151552 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586048 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488900 kB' 'Mapped: 177568 kB' 'Shmem: 7100428 kB' 'KReclaimable: 264216 kB' 'Slab: 835312 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 
571096 kB' 'KernelStack: 20176 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 9020452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314700 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 
16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.826 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ <var> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue (the same IFS/read/test cycle repeats through 00:03:01.828 with no match for: Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted HugePages_Total HugePages_Free) 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
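The scan above is setup/common.sh's get_meminfo walking /proc/meminfo one 'key: value' line at a time until it reaches the requested key, HugePages_Rsvd, whose value is echoed in the next trace lines. A minimal sketch of that kind of lookup is shown below; meminfo_value and its calling convention are illustrative stand-ins, not the exact helper from the repository:

  # Return the value column for one /proc/meminfo key, e.g. HugePages_Rsvd.
  meminfo_value() {
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] || continue   # every non-matching key is skipped, exactly as traced above
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  resv=$(meminfo_value HugePages_Rsvd)   # prints 0 in this run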
00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:01.828 nr_hugepages=1536 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:01.828 resv_hugepages=0 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:01.828 surplus_hugepages=0 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:01.828 anon_hugepages=0 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 172570736 kB' 'MemAvailable: 176573728 kB' 'Buffers: 3888 kB' 'Cached: 12111896 kB' 'SwapCached: 0 kB' 'Active: 8152004 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586500 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489312 kB' 'Mapped: 177568 kB' 'Shmem: 7100448 kB' 'KReclaimable: 264216 kB' 'Slab: 835312 kB' 'SReclaimable: 264216 kB' 'SUnreclaim: 571096 kB' 'KernelStack: 20320 kB' 'PageTables: 8964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 9020472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314796 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 3145728 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.828 16:53:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.828 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ <var> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue (the same IFS/read/test cycle repeats through 00:03:01.830 with no match for: Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped)
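With HugePages_Rsvd read back as 0, hugepages.sh has already reported nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and the surrounding lines re-read HugePages_Total from the snapshot above to assert that the kernel exposes exactly the requested pages before the count is split per NUMA node (512 on node0, 1024 on node1 in this run). The arithmetic being asserted amounts to the following sketch, with illustrative variable names:

  nr_hugepages=1536   # requested for this custom_alloc test
  resv=0              # HugePages_Rsvd read back above
  surp=0              # HugePages_Surp
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1536 here
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch: $total"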
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 84601308 kB' 'MemUsed: 13061376 kB' 'SwapCached: 0 kB' 'Active: 6528584 kB' 'Inactive: 4007180 kB' 'Active(anon): 6060968 kB' 'Inactive(anon): 0 kB' 'Active(file): 467616 kB' 'Inactive(file): 4007180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10246936 kB' 'Mapped: 149412 kB' 'AnonPages: 291984 kB' 'Shmem: 5772140 kB' 'KernelStack: 11880 kB' 'PageTables: 6076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131308 kB' 'Slab: 419852 kB' 'SReclaimable: 131308 kB' 'SUnreclaim: 288544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.830 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.830 16:53:49 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ <var> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue (the same IFS/read/test cycle repeats through 00:03:01.832 with no match for: SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked Dirty Writeback FilePages Mapped AnonPages Shmem KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp KReclaimable Slab SReclaimable SUnreclaim AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped Unaccepted HugePages_Total HugePages_Free) 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
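Node 0 reports HugePages_Surp of 0 and the loop advances to node 1. For these per-node lookups the same scan runs against /sys/devices/system/node/nodeN/meminfo, whose lines carry a 'Node N ' prefix that the trace shows being stripped before parsing. A rough per-node variant of the lookup, with illustrative names, could look like:

  # Read one key from a NUMA node's meminfo; lines look like "Node 0 HugePages_Surp:   0".
  node_meminfo_value() {
      local node=$1 key=$2 var val _
      sed 's/^Node [0-9]* //' "/sys/devices/system/node/node${node}/meminfo" |
          while IFS=': ' read -r var val _; do
              [[ $var == "$key" ]] && { echo "$val"; break; }
          done
  }

  surp0=$(node_meminfo_value 0 HugePages_Surp)   # 0 in this run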
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718480 kB' 'MemFree: 87967444 kB' 'MemUsed: 5751036 kB' 'SwapCached: 0 kB' 'Active: 1623440 kB' 'Inactive: 442652 kB' 'Active(anon): 1525552 kB' 'Inactive(anon): 0 kB' 'Active(file): 97888 kB' 'Inactive(file): 442652 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1868848 kB' 'Mapped: 28156 kB' 'AnonPages: 197348 kB' 'Shmem: 1328308 kB' 'KernelStack: 8472 kB' 'PageTables: 2576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132908 kB' 'Slab: 415460 kB' 'SReclaimable: 132908 kB' 'SUnreclaim: 282552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.832 
16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.832 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ <var> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue (the same IFS/read/test cycle repeats through 00:03:01.833 with no match for: Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked Dirty Writeback FilePages Mapped AnonPages Shmem KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp KReclaimable Slab SReclaimable SUnreclaim AnonHugePages ShmemHugePages)
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.833 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:02.092 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:02.092 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:02.092 node0=512 
expecting 512 00:03:02.092 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:02.092 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:02.092 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:02.092 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:02.092 node1=1024 expecting 1024 00:03:02.092 16:53:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:02.092 00:03:02.092 real 0m2.621s 00:03:02.092 user 0m0.993s 00:03:02.092 sys 0m1.649s 00:03:02.092 16:53:49 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:02.092 16:53:49 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:02.092 ************************************ 00:03:02.092 END TEST custom_alloc 00:03:02.092 ************************************ 00:03:02.092 16:53:49 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:02.092 16:53:49 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:02.092 16:53:49 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:02.092 16:53:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:02.092 ************************************ 00:03:02.092 START TEST no_shrink_alloc 00:03:02.092 ************************************ 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:02.092 16:53:49 
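For reference on what the trace above is computing: get_test_nr_hugepages was called with 2097152 kB restricted to node 0, and with the 2048 kB Hugepagesize this machine reports that works out to 2097152 / 2048 = 1024 huge pages, all booked against node 0 (the earlier custom_alloc pass split its pages as node0=512 and node1=1024 instead). A minimal sketch of how such a per-node reservation is usually computed and applied through the kernel's standard sysfs interface follows; the helper name reserve_node_hugepages is made up for illustration and is not the repository's setup.sh.

    # Sketch only, assuming root and the standard sysfs layout; not SPDK's setup.sh.
    reserve_node_hugepages() {
            local node=$1 size_kb=$2    # e.g. node=0 size_kb=2097152
            local hp_kb nr
            hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this system
            nr=$(( size_kb / hp_kb ))                                   # 2097152 / 2048 = 1024 pages
            echo "$nr" > "/sys/devices/system/node/node${node}/hugepages/hugepages-${hp_kb}kB/nr_hugepages"
            # Read the count back; the kernel may grant fewer pages than requested.
            cat "/sys/devices/system/node/node${node}/hugepages/hugepages-${hp_kb}kB/nr_hugepages"
    }
    # reserve_node_hugepages 0 2097152    # would request the 1024 pages this test expects
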
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.092 16:53:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:03.995 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:03.995 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:03.995 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:03.995 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:03.995 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:03.995 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:03.995 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:03.995 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:03.995 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:03.995 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:03.995 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:04.259 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:04.259 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:04.259 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:04.259 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:04.259 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:04.259 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- 
# mapfile -t mem 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173607860 kB' 'MemAvailable: 177610848 kB' 'Buffers: 3888 kB' 'Cached: 12112000 kB' 'SwapCached: 0 kB' 'Active: 8151084 kB' 'Inactive: 4449832 kB' 'Active(anon): 7585580 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488248 kB' 'Mapped: 177536 kB' 'Shmem: 7100552 kB' 'KReclaimable: 264208 kB' 'Slab: 835512 kB' 'SReclaimable: 264208 kB' 'SUnreclaim: 571304 kB' 'KernelStack: 20208 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9018200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314764 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.259 16:53:51 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue (the same IFS=': ' / read -r var val _ / continue cycle repeats for each following field from SwapCached through HardwareCorrupted)
00:03:04.260 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.260 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.260 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:04.260 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:04.260 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:04.260 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.260 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:04.260 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:04.260 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.260 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.260 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.260 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.260 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.261 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.261 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.261 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
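Every get_meminfo call in this trace follows the same pattern: slurp /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node is passed) with mapfile, strip the "Node <N> " prefix that the per-node files add to each line, then walk the lines with IFS=': ' read until the requested field matches and echo its value. Below is a self-contained sketch of that lookup, under the assumption that only a field name and an optional NUMA node are needed; the function name get_meminfo_field is illustrative rather than the actual helper in setup/common.sh.

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) prefix strip below

    # get_meminfo_field FIELD [NODE]
    # Echo FIELD's value from /proc/meminfo, or from the given node's meminfo file.
    get_meminfo_field() {
            local get=$1 node=${2:-}
            local mem_f=/proc/meminfo
            if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                    mem_f=/sys/devices/system/node/node$node/meminfo
            fi
            local -a mem
            mapfile -t mem < "$mem_f"
            # Per-node meminfo lines look like "Node 0 MemTotal: ..."; drop the prefix.
            mem=("${mem[@]#Node +([0-9]) }")
            local line var val _
            for line in "${mem[@]}"; do
                    IFS=': ' read -r var val _ <<< "$line"
                    if [[ $var == "$get" ]]; then
                            echo "$val"
                            return 0
                    fi
            done
            return 1
    }
    # Example: get_meminfo_field HugePages_Free 0    # free 2048 kB pages on node 0
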
00:03:04.261 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173606800 kB' 'MemAvailable: 177609788 kB' 'Buffers: 3888 kB' 'Cached: 12112000 kB' 'SwapCached: 0 kB' 'Active: 8151108 kB' 'Inactive: 4449832 kB' 'Active(anon): 7585604 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488276 kB' 'Mapped: 177512 kB' 'Shmem: 7100552 kB' 'KReclaimable: 264208 kB' 'Slab: 835512 kB' 'SReclaimable: 264208 kB' 'SUnreclaim: 571304 kB' 'KernelStack: 20240 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9032320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314764 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB'
00:03:04.261 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.261 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue (the same IFS=': ' / read -r var val _ / continue cycle repeats for each following field from MemFree through HugePages_Rsvd)
00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.262 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173609036 kB' 'MemAvailable: 177612024 kB' 'Buffers: 3888 kB' 'Cached: 12112016 kB' 'SwapCached: 0 kB' 'Active: 8152240 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586736 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489068 kB' 'Mapped: 177572 kB' 'Shmem: 7100568 kB' 'KReclaimable: 264208 kB' 'Slab: 835512 kB' 'SReclaimable: 264208 kB' 'SUnreclaim: 571304 kB' 'KernelStack: 20304 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9020488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314716 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB'
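At this point the helper has anon=0 and surp=0 and is about to pull HugePages_Rsvd out of the snapshot just printed (HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0). The counters it gathers lend themselves to a simple accounting check; the sketch below only illustrates that arithmetic, reuses the hypothetical get_meminfo_field helper from the earlier sketch, and is not the repository's verify_nr_hugepages.

    # Illustrative consistency check on the counters shown in the snapshot above.
    expected=1024    # pages requested by this test
    total=$(get_meminfo_field HugePages_Total)    # 1024 in the snapshot
    free=$(get_meminfo_field HugePages_Free)      # 1024
    rsvd=$(get_meminfo_field HugePages_Rsvd)      # 0
    surp=$(get_meminfo_field HugePages_Surp)      # 0
    if (( total == expected && surp == 0 && rsvd == 0 && free <= total )); then
            echo "hugepage accounting consistent: total=$total free=$free"
    else
            echo "unexpected hugepage counters: total=$total free=$free rsvd=$rsvd surp=$surp" >&2
    fi
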
[[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.263 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 
16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
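The trace above is setup/common.sh's get_meminfo scanning every /proc/meminfo key until it reaches the requested one (HugePages_Rsvd here), echoing its value and returning 0; the same walk is repeated below for HugePages_Total and then per NUMA node. A minimal standalone sketch of that lookup pattern follows; the helper name meminfo_value and its structure are illustrative only, not the SPDK implementation, which buffers the file with mapfile and strips the per-node "Node N " prefix before matching in the same way.

#!/usr/bin/env bash
# Illustrative sketch (not setup/common.sh): read one key such as HugePages_Rsvd
# or HugePages_Surp, either system-wide or for a single NUMA node.
meminfo_value() {
    local key=$1 node=${2-}
    local file=/proc/meminfo line var val
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}    # per-node files prefix each line with "Node N "
        var=${line%%:*}               # key name, before the colon
        val=${line#*:}                # value and unit, after the colon
        if [[ $var == "$key" ]]; then
            echo "${val//[!0-9]/}"    # print just the number, dropping padding and "kB"
            return 0
        fi
    done < "$file"
    return 1
}

resv=$(meminfo_value HugePages_Rsvd)      # reserved pages, system-wide
surp=$(meminfo_value HugePages_Surp 0)    # surplus pages on NUMA node 0

The surrounding hugepages.sh checks then compare values read this way, e.g. HugePages_Total (1024 in this run) against nr_hugepages plus the reserved and surplus counts, before repeating the lookup against each /sys/devices/system/node/node*/meminfo file.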
00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:04.264 nr_hugepages=1024 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:04.264 resv_hugepages=0 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:04.264 surplus_hugepages=0 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:04.264 anon_hugepages=0 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:04.264 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173606556 kB' 'MemAvailable: 177609544 kB' 'Buffers: 3888 kB' 'Cached: 12112040 kB' 'SwapCached: 0 kB' 'Active: 8152188 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586684 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489388 kB' 'Mapped: 177512 kB' 'Shmem: 7100592 kB' 'KReclaimable: 264208 kB' 'Slab: 835504 kB' 'SReclaimable: 264208 kB' 'SUnreclaim: 571296 kB' 'KernelStack: 20208 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9019016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314716 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:03:04.265 
16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.265 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.266 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 83549692 kB' 'MemUsed: 14112992 kB' 'SwapCached: 0 kB' 'Active: 6529352 kB' 'Inactive: 4007180 kB' 'Active(anon): 6061736 kB' 'Inactive(anon): 0 kB' 'Active(file): 467616 kB' 'Inactive(file): 4007180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10247056 kB' 'Mapped: 149356 kB' 'AnonPages: 292648 kB' 'Shmem: 5772260 kB' 'KernelStack: 12152 kB' 'PageTables: 6472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131300 kB' 'Slab: 420036 kB' 'SReclaimable: 131300 kB' 'SUnreclaim: 288736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.527 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.528 16:53:51 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:04.528 node0=1024 expecting 1024 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.528 16:53:51 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:07.067 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:07.067 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:07.067 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:07.067 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:07.067 16:53:54 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173604280 kB' 'MemAvailable: 177607268 kB' 'Buffers: 3888 kB' 'Cached: 12112136 kB' 'SwapCached: 0 kB' 'Active: 8152572 kB' 'Inactive: 4449832 kB' 'Active(anon): 7587068 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489728 kB' 'Mapped: 177712 kB' 'Shmem: 7100688 kB' 'KReclaimable: 264208 kB' 'Slab: 834600 kB' 'SReclaimable: 264208 kB' 'SUnreclaim: 570392 kB' 'KernelStack: 20240 kB' 'PageTables: 8564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9018732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314700 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
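The get_meminfo trace running through this part of the log reduces to a small /proc/meminfo lookup: pick the right meminfo file, strip any "Node N " prefix, then scan key/value pairs until the requested key is found. A minimal sketch reconstructed from the trace (the helper name get_meminfo_sketch and its argument handling are illustrative, not the verbatim setup/common.sh source):

  # Look up one key from /proc/meminfo, or from the per-node meminfo file
  # when a node number is given and that file exists.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      shopt -s extglob                      # needed for the +([0-9]) pattern below
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node N " prefix of per-node files
      while IFS=': ' read -r var val _; do  # _ swallows a trailing "kB" where present
          [[ $var == "$get" ]] || continue
          echo "${val:-0}"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Called as, e.g., get_meminfo_sketch HugePages_Surp, this prints 0 for the meminfo snapshot shown in this log, which is the value the trace echoes before returning.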
00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.067 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:07.068 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173604436 kB' 'MemAvailable: 177607424 kB' 'Buffers: 3888 kB' 'Cached: 12112136 kB' 'SwapCached: 0 kB' 'Active: 8152184 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586680 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489268 kB' 'Mapped: 177520 kB' 'Shmem: 7100688 kB' 'KReclaimable: 264208 kB' 'Slab: 834688 kB' 'SReclaimable: 264208 kB' 'SUnreclaim: 570480 kB' 'KernelStack: 20176 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9019872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314684 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
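The meminfo snapshot printed above is internally consistent on the hugepage side: 1024 pages at a Hugepagesize of 2048 kB account for exactly the Hugetlb total, a quick sanity check worth doing when reading these dumps:

  # 1024 huge pages x 2048 kB per page = 2097152 kB, matching the Hugetlb line
  echo $(( 1024 * 2048 ))   # prints 2097152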
00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.069 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
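The surrounding verify_nr_hugepages trace boils down to: read AnonHugePages, HugePages_Surp and HugePages_Rsvd, fold any per-node surplus into the measured count, and require each node's count to match the expected value (the "node0=1024 expecting 1024" line earlier in this log). A condensed sketch under those assumptions, reusing the hypothetical get_meminfo_sketch helper from the note above; the array names mirror the trace, but the node values are hard-coded here for illustration and this is not the hugepages.sh implementation:

  verify_hugepages_sketch() {
      # hypothetical measured vs. expected per-node counts (node0=1024 in this run)
      local -a nodes_test=( [0]=1024 ) nodes_sys=( [0]=1024 )
      local node surp
      for node in "${!nodes_test[@]}"; do
          surp=$(get_meminfo_sketch HugePages_Surp "$node")   # 0 in the snapshot above
          (( nodes_test[node] += surp ))
          echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
          [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] || return 1
      done
  }

In the run traced here the comparison is 1024 against 1024, so the check passes and the script moves on to the NRHUGE=512 setup output step.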
00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.070 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173610324 kB' 'MemAvailable: 177613312 kB' 'Buffers: 3888 kB' 'Cached: 12112156 kB' 'SwapCached: 0 kB' 
'Active: 8152180 kB' 'Inactive: 4449832 kB' 'Active(anon): 7586676 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489272 kB' 'Mapped: 177516 kB' 'Shmem: 7100708 kB' 'KReclaimable: 264208 kB' 'Slab: 834688 kB' 'SReclaimable: 264208 kB' 'SUnreclaim: 570480 kB' 'KernelStack: 20272 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9019892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314700 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.071 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:07.072 nr_hugepages=1024 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:07.072 resv_hugepages=0 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:07.072 surplus_hugepages=0 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:07.072 anon_hugepages=0 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.072 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173609032 kB' 'MemAvailable: 177612020 kB' 'Buffers: 3888 kB' 'Cached: 12112180 kB' 'SwapCached: 0 kB' 'Active: 8152624 kB' 'Inactive: 4449832 kB' 'Active(anon): 7587120 kB' 'Inactive(anon): 0 kB' 'Active(file): 565504 kB' 'Inactive(file): 4449832 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489700 kB' 'Mapped: 177516 kB' 'Shmem: 7100732 kB' 'KReclaimable: 264208 kB' 'Slab: 834688 kB' 'SReclaimable: 264208 kB' 'SUnreclaim: 570480 kB' 'KernelStack: 20224 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 9021408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 314796 kB' 'VmallocChunk: 0 kB' 'Percpu: 69888 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2796500 kB' 'DirectMap2M: 19951616 kB' 'DirectMap1G: 179306496 kB' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.073 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
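
The loop traced here is setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo (or a node's meminfo under /sys when a node is given), strips any "Node <n>" prefix, then walks the keys one read at a time until it reaches the requested one (HugePages_Rsvd, HugePages_Total, HugePages_Surp) and echoes its value. A condensed sketch of that pattern, reconstructed from the xtrace rather than copied from the SPDK source (the shopt line and the trailing return 1 are additions to keep the sketch self-contained), looks like this:

    shopt -s extglob                       # needed for the +([0-9]) prefix pattern below

    get_meminfo() {                        # usage: get_meminfo <key> [<numa node>]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node statistics live under /sys and prefix every key with "Node <n> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node <n> " prefix, if present
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then  # e.g. HugePages_Total, HugePages_Rsvd, HugePages_Surp
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Total or get_meminfo HugePages_Surp 0, this yields the 1024 and 0 values echoed in the surrounding trace; every key before the match simply produces the "continue" iterations seen above.
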
00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:07.074 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 83556680 kB' 'MemUsed: 14106004 kB' 'SwapCached: 0 kB' 'Active: 6530336 kB' 'Inactive: 4007180 kB' 'Active(anon): 6062720 
kB' 'Inactive(anon): 0 kB' 'Active(file): 467616 kB' 'Inactive(file): 4007180 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10247100 kB' 'Mapped: 149360 kB' 'AnonPages: 293524 kB' 'Shmem: 5772304 kB' 'KernelStack: 12296 kB' 'PageTables: 7020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131300 kB' 'Slab: 419460 kB' 'SReclaimable: 131300 kB' 'SUnreclaim: 288160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 
16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 
16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.075 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:07.076 node0=1024 expecting 1024 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:07.076 00:03:07.076 real 0m5.071s 00:03:07.076 user 0m1.923s 00:03:07.076 sys 0m3.071s 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:07.076 16:53:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:07.076 ************************************ 00:03:07.076 END TEST no_shrink_alloc 00:03:07.076 ************************************ 00:03:07.076 16:53:54 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:07.076 16:53:54 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:07.076 16:53:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:07.076 16:53:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:07.076 16:53:54 
setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:07.076 16:53:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:07.076 16:53:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:07.076 16:53:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:07.076 16:53:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:07.076 16:53:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:07.076 16:53:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:07.076 16:53:54 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:07.076 16:53:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:07.076 16:53:54 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:07.076 00:03:07.076 real 0m19.913s 00:03:07.076 user 0m7.533s 00:03:07.076 sys 0m11.660s 00:03:07.076 16:53:54 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:07.076 16:53:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:07.076 ************************************ 00:03:07.076 END TEST hugepages 00:03:07.076 ************************************ 00:03:07.076 16:53:54 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:07.076 16:53:54 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:07.076 16:53:54 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:07.076 16:53:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:07.336 ************************************ 00:03:07.336 START TEST driver 00:03:07.336 ************************************ 00:03:07.336 16:53:54 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:07.336 * Looking for test storage... 
00:03:07.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:07.336 16:53:54 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:07.336 16:53:54 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:07.336 16:53:54 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.530 16:53:58 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:11.530 16:53:58 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:11.530 16:53:58 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:11.530 16:53:58 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:11.530 ************************************ 00:03:11.530 START TEST guess_driver 00:03:11.530 ************************************ 00:03:11.530 16:53:58 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:11.531 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:11.531 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:11.531 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:11.531 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:11.531 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:11.531 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:11.531 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:11.531 16:53:58 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:11.531 Looking for driver=vfio-pci 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.531 16:53:58 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.437 16:54:00 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.437 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.437 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.437 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.437 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.437 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.437 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.437 16:54:01 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.437 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.437 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.437 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.437 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.437 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.437 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.437 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.437 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.438 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.438 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.438 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.438 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.438 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.438 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.438 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.438 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.438 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.696 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:13.696 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.696 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:14.293 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:14.293 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:14.293 16:54:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:14.552 16:54:02 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:14.552 16:54:02 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:14.552 16:54:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.552 16:54:02 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.742 00:03:18.742 real 0m7.310s 00:03:18.742 user 0m1.927s 00:03:18.742 sys 0m3.809s 00:03:18.742 16:54:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:18.742 16:54:05 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:18.742 ************************************ 00:03:18.743 END TEST guess_driver 00:03:18.743 ************************************ 00:03:18.743 00:03:18.743 real 0m11.143s 00:03:18.743 user 0m2.939s 00:03:18.743 sys 0m5.834s 00:03:18.743 16:54:05 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:18.743 
16:54:05 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:18.743 ************************************ 00:03:18.743 END TEST driver 00:03:18.743 ************************************ 00:03:18.743 16:54:05 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:18.743 16:54:05 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:18.743 16:54:05 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:18.743 16:54:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:18.743 ************************************ 00:03:18.743 START TEST devices 00:03:18.743 ************************************ 00:03:18.743 16:54:05 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:18.743 * Looking for test storage... 00:03:18.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:18.743 16:54:06 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:18.743 16:54:06 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:18.743 16:54:06 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.743 16:54:06 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:22.032 16:54:09 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:22.032 16:54:09 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:22.032 16:54:09 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:22.032 16:54:09 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:22.032 16:54:09 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:22.032 16:54:09 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:22.032 16:54:09 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:22.032 16:54:09 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:22.032 16:54:09 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:22.032 16:54:09 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:22.032 No valid GPT data, 
bailing 00:03:22.032 16:54:09 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:22.032 16:54:09 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:22.032 16:54:09 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:22.032 16:54:09 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:22.032 16:54:09 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:22.032 16:54:09 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:22.032 16:54:09 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:22.032 16:54:09 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:22.032 16:54:09 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:22.032 16:54:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:22.032 ************************************ 00:03:22.032 START TEST nvme_mount 00:03:22.032 ************************************ 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:22.032 16:54:09 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:22.032 16:54:09 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:22.601 Creating new GPT entries in memory. 00:03:22.601 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:22.601 other utilities. 00:03:22.601 16:54:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:22.601 16:54:10 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:22.601 16:54:10 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:22.601 16:54:10 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:22.601 16:54:10 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:23.982 Creating new GPT entries in memory. 00:03:23.982 The operation has completed successfully. 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2865942 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.982 16:54:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.516 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:26.517 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:26.517 16:54:13 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:26.776 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:26.776 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:26.776 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:26.776 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:26.776 16:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:26.776 16:54:14 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:26.776 16:54:14 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.776 16:54:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:26.776 16:54:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:26.776 16:54:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.776 16:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:26.776 16:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:26.776 16:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:26.776 16:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.776 16:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:26.776 16:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:26.777 16:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:26.777 16:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:26.777 16:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:26.777 16:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.777 16:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:26.777 16:54:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:26.777 16:54:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.777 16:54:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.309 16:54:16 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:31.846 16:54:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.846 16:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:31.846 16:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:31.846 16:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:31.846 16:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:31.846 16:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.846 16:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:31.846 16:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:31.846 16:54:19 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:31.846 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:31.846 00:03:31.846 real 0m9.998s 00:03:31.846 user 0m2.720s 00:03:31.846 sys 0m4.972s 00:03:31.846 16:54:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:31.846 16:54:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:31.846 ************************************ 00:03:31.846 END TEST nvme_mount 00:03:31.846 ************************************ 
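For reference, the nvme_mount flow traced above reduces to roughly the following standalone sequence; this is a hand-written sketch with illustrative paths (the real test mounts under spdk/test/setup/nvme_mount and goes through the setup/common.sh helpers), not the test script itself:

# nvme_mount sketch: assumes /dev/nvme0n1 is the dedicated test disk
MNT=/tmp/nvme_mount                        # illustrative mount point
sgdisk /dev/nvme0n1 --zap-all              # destroy any existing partition table
sgdisk /dev/nvme0n1 --new=1:2048:2099199   # create the 1 GiB test partition (nvme0n1p1)
mkfs.ext4 -qF /dev/nvme0n1p1               # format it
mkdir -p "$MNT" && mount /dev/nvme0n1p1 "$MNT"
touch "$MNT/test_nvme"                     # dummy file that the verify step checks for
umount "$MNT"                              # cleanup_nvme: unmount ...
wipefs --all /dev/nvme0n1p1                # ... then wipe the partition and the disk
wipefs --all /dev/nvme0n1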
00:03:31.846 16:54:19 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:31.846 16:54:19 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:31.846 16:54:19 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:31.846 16:54:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:31.846 ************************************ 00:03:31.846 START TEST dm_mount 00:03:31.846 ************************************ 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:31.846 16:54:19 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:32.820 Creating new GPT entries in memory. 00:03:32.820 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:32.820 other utilities. 00:03:32.820 16:54:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:32.820 16:54:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.820 16:54:20 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:32.820 16:54:20 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:32.820 16:54:20 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:33.753 Creating new GPT entries in memory. 00:03:33.753 The operation has completed successfully. 
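The dm_mount test that starts here repeats the same partitioning steps, but with two partitions that are then stacked under a device-mapper target. A rough sketch of the equivalent manual commands follows; the exact dmsetup table is not visible in the trace, so the mapping lines below are only an assumption (a linear concatenation of the two partitions):

# dm_mount sketch: two 1 GiB partitions, then a device-mapper device on top
sgdisk /dev/nvme0n1 --zap-all
sgdisk /dev/nvme0n1 --new=1:2048:2099199      # nvme0n1p1
sgdisk /dev/nvme0n1 --new=2:2099200:4196351   # nvme0n1p2
# assumed table; the trace only shows "dmsetup create nvme_dm_test", not its input
dmsetup create nvme_dm_test <<'EOF'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
EOF
mkfs.ext4 -qF /dev/mapper/nvme_dm_test        # format and mount, as the trace below does
mount /dev/mapper/nvme_dm_test /tmp/dm_mount  # illustrative mount point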
00:03:33.753 16:54:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:33.753 16:54:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:33.753 16:54:21 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:33.753 16:54:21 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:33.753 16:54:21 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:34.691 The operation has completed successfully. 00:03:34.691 16:54:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:34.691 16:54:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.691 16:54:22 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2869902 00:03:34.691 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:34.691 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.691 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:34.691 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.950 16:54:22 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:37.483 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:37.742 16:54:25 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.742 16:54:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:40.278 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:40.278 00:03:40.278 real 0m8.635s 00:03:40.278 user 0m2.075s 00:03:40.278 sys 0m3.550s 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:40.278 16:54:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:40.278 ************************************ 00:03:40.278 END TEST dm_mount 00:03:40.278 ************************************ 00:03:40.278 16:54:27 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:40.278 16:54:27 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:40.278 16:54:27 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.278 16:54:27 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:40.278 16:54:27 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:40.278 16:54:27 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:40.278 16:54:27 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:40.537 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:40.537 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:40.537 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:40.537 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:40.537 16:54:28 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:40.537 16:54:28 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:40.538 16:54:28 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:40.538 16:54:28 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:40.538 16:54:28 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:40.538 16:54:28 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:40.538 16:54:28 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:40.796 00:03:40.796 real 0m22.261s 00:03:40.796 user 0m6.064s 00:03:40.796 sys 0m10.741s 00:03:40.796 16:54:28 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:40.796 16:54:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:40.796 ************************************ 00:03:40.796 END TEST devices 00:03:40.796 ************************************ 00:03:40.796 00:03:40.796 real 1m12.004s 00:03:40.796 user 0m22.536s 00:03:40.796 sys 0m39.282s 00:03:40.796 16:54:28 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:40.796 16:54:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:40.796 ************************************ 00:03:40.796 END TEST setup.sh 00:03:40.796 ************************************ 00:03:40.796 16:54:28 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:43.331 Hugepages 00:03:43.331 node hugesize free / total 00:03:43.331 node0 1048576kB 0 / 0 00:03:43.331 node0 2048kB 2048 / 2048 00:03:43.331 node1 1048576kB 0 / 0 00:03:43.331 node1 2048kB 0 / 0 00:03:43.331 00:03:43.331 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:43.331 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:43.331 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:43.331 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:43.331 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:43.331 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:43.331 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:43.331 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:43.331 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:43.331 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:43.331 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:43.331 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:43.331 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:43.331 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:43.331 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:43.331 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:43.331 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:43.331 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:43.331 16:54:30 -- spdk/autotest.sh@130 -- # uname -s 00:03:43.331 16:54:30 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:43.331 16:54:30 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:43.331 16:54:30 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:45.925 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:45.925 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:45.925 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:45.925 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:45.925 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:45.925 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:45.926 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:45.926 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:45.926 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:45.926 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:45.926 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:45.926 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:45.926 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:46.184 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:46.184 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:46.184 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:46.752 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:47.010 16:54:34 -- common/autotest_common.sh@1528 -- # sleep 1 00:03:47.947 16:54:35 -- common/autotest_common.sh@1529 -- # bdfs=() 00:03:47.947 16:54:35 -- common/autotest_common.sh@1529 -- # local bdfs 00:03:47.947 16:54:35 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:03:47.947 16:54:35 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:03:47.947 16:54:35 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:47.947 16:54:35 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:47.947 16:54:35 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:47.947 16:54:35 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:47.947 16:54:35 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:47.947 16:54:35 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:47.947 16:54:35 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 00:03:47.947 16:54:35 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:50.481 Waiting for block devices as requested 00:03:50.481 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:50.481 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:50.481 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:50.481 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:50.481 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:50.481 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:50.741 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:50.741 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:50.741 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:50.999 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:50.999 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:50.999 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:50.999 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:51.259 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:51.259 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:51.259 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:51.259 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:51.519 16:54:38 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
00:03:51.519 16:54:39 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:51.519 16:54:39 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:03:51.519 16:54:39 -- common/autotest_common.sh@1498 -- # grep 0000:5e:00.0/nvme/nvme 00:03:51.519 16:54:39 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:51.519 16:54:39 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:51.519 16:54:39 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:51.519 16:54:39 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:03:51.519 16:54:39 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:03:51.519 16:54:39 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:03:51.519 16:54:39 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:03:51.519 16:54:39 -- common/autotest_common.sh@1541 -- # grep oacs 00:03:51.519 16:54:39 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:03:51.519 16:54:39 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:03:51.519 16:54:39 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:03:51.519 16:54:39 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:03:51.519 16:54:39 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:03:51.519 16:54:39 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:03:51.519 16:54:39 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:03:51.519 16:54:39 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:03:51.519 16:54:39 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:03:51.519 16:54:39 -- common/autotest_common.sh@1553 -- # continue 00:03:51.519 16:54:39 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:51.519 16:54:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:51.519 16:54:39 -- common/autotest_common.sh@10 -- # set +x 00:03:51.519 16:54:39 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:51.519 16:54:39 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:51.519 16:54:39 -- common/autotest_common.sh@10 -- # set +x 00:03:51.519 16:54:39 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:54.052 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:54.052 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:54.619 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:54.878 16:54:42 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:54.878 16:54:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.878 16:54:42 -- 
common/autotest_common.sh@10 -- # set +x 00:03:54.878 16:54:42 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:54.878 16:54:42 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:03:54.878 16:54:42 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:03:54.878 16:54:42 -- common/autotest_common.sh@1573 -- # bdfs=() 00:03:54.878 16:54:42 -- common/autotest_common.sh@1573 -- # local bdfs 00:03:54.878 16:54:42 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:03:54.878 16:54:42 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:54.878 16:54:42 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:54.878 16:54:42 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:54.878 16:54:42 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:54.878 16:54:42 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:54.878 16:54:42 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:54.878 16:54:42 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 00:03:54.878 16:54:42 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:03:54.878 16:54:42 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:54.878 16:54:42 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:03:54.878 16:54:42 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:54.878 16:54:42 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:03:54.878 16:54:42 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:5e:00.0 00:03:54.878 16:54:42 -- common/autotest_common.sh@1588 -- # [[ -z 0000:5e:00.0 ]] 00:03:54.878 16:54:42 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=2878532 00:03:54.878 16:54:42 -- common/autotest_common.sh@1594 -- # waitforlisten 2878532 00:03:54.878 16:54:42 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.878 16:54:42 -- common/autotest_common.sh@827 -- # '[' -z 2878532 ']' 00:03:54.878 16:54:42 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.878 16:54:42 -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:54.878 16:54:42 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:54.878 16:54:42 -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:54.878 16:54:42 -- common/autotest_common.sh@10 -- # set +x 00:03:55.137 [2024-05-15 16:54:42.554119] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:03:55.137 [2024-05-15 16:54:42.554173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2878532 ] 00:03:55.137 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.137 [2024-05-15 16:54:42.607981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.137 [2024-05-15 16:54:42.685865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.716 16:54:43 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:55.716 16:54:43 -- common/autotest_common.sh@860 -- # return 0 00:03:55.716 16:54:43 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:03:55.716 16:54:43 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:03:55.716 16:54:43 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:59.005 nvme0n1 00:03:59.005 16:54:46 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:59.005 [2024-05-15 16:54:46.477435] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:59.005 request: 00:03:59.005 { 00:03:59.005 "nvme_ctrlr_name": "nvme0", 00:03:59.005 "password": "test", 00:03:59.005 "method": "bdev_nvme_opal_revert", 00:03:59.005 "req_id": 1 00:03:59.005 } 00:03:59.005 Got JSON-RPC error response 00:03:59.005 response: 00:03:59.005 { 00:03:59.005 "code": -32602, 00:03:59.005 "message": "Invalid parameters" 00:03:59.005 } 00:03:59.005 16:54:46 -- common/autotest_common.sh@1600 -- # true 00:03:59.005 16:54:46 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:03:59.005 16:54:46 -- common/autotest_common.sh@1604 -- # killprocess 2878532 00:03:59.005 16:54:46 -- common/autotest_common.sh@946 -- # '[' -z 2878532 ']' 00:03:59.005 16:54:46 -- common/autotest_common.sh@950 -- # kill -0 2878532 00:03:59.005 16:54:46 -- common/autotest_common.sh@951 -- # uname 00:03:59.005 16:54:46 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:59.005 16:54:46 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2878532 00:03:59.005 16:54:46 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:59.005 16:54:46 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:59.005 16:54:46 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2878532' 00:03:59.005 killing process with pid 2878532 00:03:59.005 16:54:46 -- common/autotest_common.sh@965 -- # kill 2878532 00:03:59.005 16:54:46 -- common/autotest_common.sh@970 -- # wait 2878532 00:04:00.910 16:54:48 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:00.910 16:54:48 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:00.910 16:54:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:00.910 16:54:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:00.910 16:54:48 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:00.910 16:54:48 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:00.910 16:54:48 -- common/autotest_common.sh@10 -- # set +x 00:04:00.910 16:54:48 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:00.910 16:54:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:00.910 16:54:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:04:00.910 16:54:48 -- common/autotest_common.sh@10 -- # set +x 00:04:00.910 ************************************ 00:04:00.910 START TEST env 00:04:00.910 ************************************ 00:04:00.910 16:54:48 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:00.910 * Looking for test storage... 00:04:00.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:00.910 16:54:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:00.910 16:54:48 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:00.910 16:54:48 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:00.910 16:54:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.910 ************************************ 00:04:00.910 START TEST env_memory 00:04:00.910 ************************************ 00:04:00.910 16:54:48 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:00.910 00:04:00.910 00:04:00.910 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.910 http://cunit.sourceforge.net/ 00:04:00.910 00:04:00.910 00:04:00.910 Suite: memory 00:04:00.910 Test: alloc and free memory map ...[2024-05-15 16:54:48.370752] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:00.910 passed 00:04:00.910 Test: mem map translation ...[2024-05-15 16:54:48.390225] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:00.910 [2024-05-15 16:54:48.390240] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:00.910 [2024-05-15 16:54:48.390276] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:00.910 [2024-05-15 16:54:48.390283] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:00.910 passed 00:04:00.910 Test: mem map registration ...[2024-05-15 16:54:48.429096] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:00.910 [2024-05-15 16:54:48.429110] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:00.910 passed 00:04:00.910 Test: mem map adjacent registrations ...passed 00:04:00.910 00:04:00.910 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.910 suites 1 1 n/a 0 0 00:04:00.910 tests 4 4 4 0 0 00:04:00.910 asserts 152 152 152 0 n/a 00:04:00.910 00:04:00.910 Elapsed time = 0.133 seconds 00:04:00.910 00:04:00.910 real 0m0.140s 00:04:00.910 user 0m0.136s 00:04:00.910 sys 0m0.003s 00:04:00.910 16:54:48 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:00.910 16:54:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:00.910 ************************************ 00:04:00.910 END TEST 
env_memory 00:04:00.910 ************************************ 00:04:00.910 16:54:48 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:00.910 16:54:48 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:00.910 16:54:48 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:00.910 16:54:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.910 ************************************ 00:04:00.910 START TEST env_vtophys 00:04:00.910 ************************************ 00:04:00.910 16:54:48 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:01.169 EAL: lib.eal log level changed from notice to debug 00:04:01.169 EAL: Detected lcore 0 as core 0 on socket 0 00:04:01.169 EAL: Detected lcore 1 as core 1 on socket 0 00:04:01.169 EAL: Detected lcore 2 as core 2 on socket 0 00:04:01.169 EAL: Detected lcore 3 as core 3 on socket 0 00:04:01.169 EAL: Detected lcore 4 as core 4 on socket 0 00:04:01.169 EAL: Detected lcore 5 as core 5 on socket 0 00:04:01.169 EAL: Detected lcore 6 as core 6 on socket 0 00:04:01.169 EAL: Detected lcore 7 as core 8 on socket 0 00:04:01.169 EAL: Detected lcore 8 as core 9 on socket 0 00:04:01.169 EAL: Detected lcore 9 as core 10 on socket 0 00:04:01.169 EAL: Detected lcore 10 as core 11 on socket 0 00:04:01.169 EAL: Detected lcore 11 as core 12 on socket 0 00:04:01.169 EAL: Detected lcore 12 as core 13 on socket 0 00:04:01.169 EAL: Detected lcore 13 as core 16 on socket 0 00:04:01.169 EAL: Detected lcore 14 as core 17 on socket 0 00:04:01.169 EAL: Detected lcore 15 as core 18 on socket 0 00:04:01.169 EAL: Detected lcore 16 as core 19 on socket 0 00:04:01.169 EAL: Detected lcore 17 as core 20 on socket 0 00:04:01.169 EAL: Detected lcore 18 as core 21 on socket 0 00:04:01.169 EAL: Detected lcore 19 as core 25 on socket 0 00:04:01.169 EAL: Detected lcore 20 as core 26 on socket 0 00:04:01.169 EAL: Detected lcore 21 as core 27 on socket 0 00:04:01.169 EAL: Detected lcore 22 as core 28 on socket 0 00:04:01.169 EAL: Detected lcore 23 as core 29 on socket 0 00:04:01.169 EAL: Detected lcore 24 as core 0 on socket 1 00:04:01.169 EAL: Detected lcore 25 as core 1 on socket 1 00:04:01.169 EAL: Detected lcore 26 as core 2 on socket 1 00:04:01.169 EAL: Detected lcore 27 as core 3 on socket 1 00:04:01.169 EAL: Detected lcore 28 as core 4 on socket 1 00:04:01.169 EAL: Detected lcore 29 as core 5 on socket 1 00:04:01.169 EAL: Detected lcore 30 as core 6 on socket 1 00:04:01.169 EAL: Detected lcore 31 as core 9 on socket 1 00:04:01.169 EAL: Detected lcore 32 as core 10 on socket 1 00:04:01.169 EAL: Detected lcore 33 as core 11 on socket 1 00:04:01.169 EAL: Detected lcore 34 as core 12 on socket 1 00:04:01.169 EAL: Detected lcore 35 as core 13 on socket 1 00:04:01.169 EAL: Detected lcore 36 as core 16 on socket 1 00:04:01.169 EAL: Detected lcore 37 as core 17 on socket 1 00:04:01.169 EAL: Detected lcore 38 as core 18 on socket 1 00:04:01.169 EAL: Detected lcore 39 as core 19 on socket 1 00:04:01.169 EAL: Detected lcore 40 as core 20 on socket 1 00:04:01.169 EAL: Detected lcore 41 as core 21 on socket 1 00:04:01.169 EAL: Detected lcore 42 as core 24 on socket 1 00:04:01.169 EAL: Detected lcore 43 as core 25 on socket 1 00:04:01.169 EAL: Detected lcore 44 as core 26 on socket 1 00:04:01.169 EAL: Detected lcore 45 as core 27 on socket 1 00:04:01.169 EAL: Detected lcore 46 as core 28 on socket 1 00:04:01.169 EAL: 
Detected lcore 47 as core 29 on socket 1 00:04:01.169 EAL: Detected lcore 48 as core 0 on socket 0 00:04:01.169 EAL: Detected lcore 49 as core 1 on socket 0 00:04:01.169 EAL: Detected lcore 50 as core 2 on socket 0 00:04:01.169 EAL: Detected lcore 51 as core 3 on socket 0 00:04:01.169 EAL: Detected lcore 52 as core 4 on socket 0 00:04:01.169 EAL: Detected lcore 53 as core 5 on socket 0 00:04:01.169 EAL: Detected lcore 54 as core 6 on socket 0 00:04:01.169 EAL: Detected lcore 55 as core 8 on socket 0 00:04:01.169 EAL: Detected lcore 56 as core 9 on socket 0 00:04:01.170 EAL: Detected lcore 57 as core 10 on socket 0 00:04:01.170 EAL: Detected lcore 58 as core 11 on socket 0 00:04:01.170 EAL: Detected lcore 59 as core 12 on socket 0 00:04:01.170 EAL: Detected lcore 60 as core 13 on socket 0 00:04:01.170 EAL: Detected lcore 61 as core 16 on socket 0 00:04:01.170 EAL: Detected lcore 62 as core 17 on socket 0 00:04:01.170 EAL: Detected lcore 63 as core 18 on socket 0 00:04:01.170 EAL: Detected lcore 64 as core 19 on socket 0 00:04:01.170 EAL: Detected lcore 65 as core 20 on socket 0 00:04:01.170 EAL: Detected lcore 66 as core 21 on socket 0 00:04:01.170 EAL: Detected lcore 67 as core 25 on socket 0 00:04:01.170 EAL: Detected lcore 68 as core 26 on socket 0 00:04:01.170 EAL: Detected lcore 69 as core 27 on socket 0 00:04:01.170 EAL: Detected lcore 70 as core 28 on socket 0 00:04:01.170 EAL: Detected lcore 71 as core 29 on socket 0 00:04:01.170 EAL: Detected lcore 72 as core 0 on socket 1 00:04:01.170 EAL: Detected lcore 73 as core 1 on socket 1 00:04:01.170 EAL: Detected lcore 74 as core 2 on socket 1 00:04:01.170 EAL: Detected lcore 75 as core 3 on socket 1 00:04:01.170 EAL: Detected lcore 76 as core 4 on socket 1 00:04:01.170 EAL: Detected lcore 77 as core 5 on socket 1 00:04:01.170 EAL: Detected lcore 78 as core 6 on socket 1 00:04:01.170 EAL: Detected lcore 79 as core 9 on socket 1 00:04:01.170 EAL: Detected lcore 80 as core 10 on socket 1 00:04:01.170 EAL: Detected lcore 81 as core 11 on socket 1 00:04:01.170 EAL: Detected lcore 82 as core 12 on socket 1 00:04:01.170 EAL: Detected lcore 83 as core 13 on socket 1 00:04:01.170 EAL: Detected lcore 84 as core 16 on socket 1 00:04:01.170 EAL: Detected lcore 85 as core 17 on socket 1 00:04:01.170 EAL: Detected lcore 86 as core 18 on socket 1 00:04:01.170 EAL: Detected lcore 87 as core 19 on socket 1 00:04:01.170 EAL: Detected lcore 88 as core 20 on socket 1 00:04:01.170 EAL: Detected lcore 89 as core 21 on socket 1 00:04:01.170 EAL: Detected lcore 90 as core 24 on socket 1 00:04:01.170 EAL: Detected lcore 91 as core 25 on socket 1 00:04:01.170 EAL: Detected lcore 92 as core 26 on socket 1 00:04:01.170 EAL: Detected lcore 93 as core 27 on socket 1 00:04:01.170 EAL: Detected lcore 94 as core 28 on socket 1 00:04:01.170 EAL: Detected lcore 95 as core 29 on socket 1 00:04:01.170 EAL: Maximum logical cores by configuration: 128 00:04:01.170 EAL: Detected CPU lcores: 96 00:04:01.170 EAL: Detected NUMA nodes: 2 00:04:01.170 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:01.170 EAL: Detected shared linkage of DPDK 00:04:01.170 EAL: No shared files mode enabled, IPC will be disabled 00:04:01.170 EAL: Bus pci wants IOVA as 'DC' 00:04:01.170 EAL: Buses did not request a specific IOVA mode. 00:04:01.170 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:01.170 EAL: Selected IOVA mode 'VA' 00:04:01.170 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.170 EAL: Probing VFIO support... 
00:04:01.170 EAL: IOMMU type 1 (Type 1) is supported 00:04:01.170 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:01.170 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:01.170 EAL: VFIO support initialized 00:04:01.170 EAL: Ask a virtual area of 0x2e000 bytes 00:04:01.170 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:01.170 EAL: Setting up physically contiguous memory... 00:04:01.170 EAL: Setting maximum number of open files to 524288 00:04:01.170 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:01.170 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:01.170 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:01.170 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.170 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:01.170 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.170 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.170 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:01.170 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:01.170 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.170 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:01.170 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.170 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.170 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:01.170 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:01.170 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.170 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:01.170 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.170 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.170 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:01.170 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:01.170 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.170 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:01.170 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:01.170 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.170 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:01.170 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:01.170 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:01.170 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.170 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:01.170 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:01.170 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.170 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:01.170 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:01.170 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.170 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:01.170 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:01.170 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.170 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:01.170 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:01.170 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.170 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:01.170 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:01.170 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.170 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:01.170 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:01.170 EAL: Ask a virtual area of 0x61000 bytes 00:04:01.170 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:01.170 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:01.170 EAL: Ask a virtual area of 0x400000000 bytes 00:04:01.170 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:01.170 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:01.170 EAL: Hugepages will be freed exactly as allocated. 00:04:01.170 EAL: No shared files mode enabled, IPC is disabled 00:04:01.170 EAL: No shared files mode enabled, IPC is disabled 00:04:01.170 EAL: TSC frequency is ~2300000 KHz 00:04:01.170 EAL: Main lcore 0 is ready (tid=7f105f140a00;cpuset=[0]) 00:04:01.170 EAL: Trying to obtain current memory policy. 00:04:01.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.170 EAL: Restoring previous memory policy: 0 00:04:01.170 EAL: request: mp_malloc_sync 00:04:01.170 EAL: No shared files mode enabled, IPC is disabled 00:04:01.170 EAL: Heap on socket 0 was expanded by 2MB 00:04:01.170 EAL: No shared files mode enabled, IPC is disabled 00:04:01.170 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:01.170 EAL: Mem event callback 'spdk:(nil)' registered 00:04:01.170 00:04:01.170 00:04:01.170 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.170 http://cunit.sourceforge.net/ 00:04:01.170 00:04:01.170 00:04:01.170 Suite: components_suite 00:04:01.170 Test: vtophys_malloc_test ...passed 00:04:01.170 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:01.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.170 EAL: Restoring previous memory policy: 4 00:04:01.170 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.170 EAL: request: mp_malloc_sync 00:04:01.170 EAL: No shared files mode enabled, IPC is disabled 00:04:01.170 EAL: Heap on socket 0 was expanded by 4MB 00:04:01.170 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.170 EAL: request: mp_malloc_sync 00:04:01.170 EAL: No shared files mode enabled, IPC is disabled 00:04:01.170 EAL: Heap on socket 0 was shrunk by 4MB 00:04:01.170 EAL: Trying to obtain current memory policy. 00:04:01.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.170 EAL: Restoring previous memory policy: 4 00:04:01.170 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.170 EAL: request: mp_malloc_sync 00:04:01.170 EAL: No shared files mode enabled, IPC is disabled 00:04:01.170 EAL: Heap on socket 0 was expanded by 6MB 00:04:01.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.171 EAL: request: mp_malloc_sync 00:04:01.171 EAL: No shared files mode enabled, IPC is disabled 00:04:01.171 EAL: Heap on socket 0 was shrunk by 6MB 00:04:01.171 EAL: Trying to obtain current memory policy. 00:04:01.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.171 EAL: Restoring previous memory policy: 4 00:04:01.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.171 EAL: request: mp_malloc_sync 00:04:01.171 EAL: No shared files mode enabled, IPC is disabled 00:04:01.171 EAL: Heap on socket 0 was expanded by 10MB 00:04:01.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.171 EAL: request: mp_malloc_sync 00:04:01.171 EAL: No shared files mode enabled, IPC is disabled 00:04:01.171 EAL: Heap on socket 0 was shrunk by 10MB 00:04:01.171 EAL: Trying to obtain current memory policy. 
00:04:01.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.171 EAL: Restoring previous memory policy: 4 00:04:01.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.171 EAL: request: mp_malloc_sync 00:04:01.171 EAL: No shared files mode enabled, IPC is disabled 00:04:01.171 EAL: Heap on socket 0 was expanded by 18MB 00:04:01.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.171 EAL: request: mp_malloc_sync 00:04:01.171 EAL: No shared files mode enabled, IPC is disabled 00:04:01.171 EAL: Heap on socket 0 was shrunk by 18MB 00:04:01.171 EAL: Trying to obtain current memory policy. 00:04:01.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.171 EAL: Restoring previous memory policy: 4 00:04:01.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.171 EAL: request: mp_malloc_sync 00:04:01.171 EAL: No shared files mode enabled, IPC is disabled 00:04:01.171 EAL: Heap on socket 0 was expanded by 34MB 00:04:01.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.171 EAL: request: mp_malloc_sync 00:04:01.171 EAL: No shared files mode enabled, IPC is disabled 00:04:01.171 EAL: Heap on socket 0 was shrunk by 34MB 00:04:01.171 EAL: Trying to obtain current memory policy. 00:04:01.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.171 EAL: Restoring previous memory policy: 4 00:04:01.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.171 EAL: request: mp_malloc_sync 00:04:01.171 EAL: No shared files mode enabled, IPC is disabled 00:04:01.171 EAL: Heap on socket 0 was expanded by 66MB 00:04:01.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.171 EAL: request: mp_malloc_sync 00:04:01.171 EAL: No shared files mode enabled, IPC is disabled 00:04:01.171 EAL: Heap on socket 0 was shrunk by 66MB 00:04:01.171 EAL: Trying to obtain current memory policy. 00:04:01.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.171 EAL: Restoring previous memory policy: 4 00:04:01.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.171 EAL: request: mp_malloc_sync 00:04:01.171 EAL: No shared files mode enabled, IPC is disabled 00:04:01.171 EAL: Heap on socket 0 was expanded by 130MB 00:04:01.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.171 EAL: request: mp_malloc_sync 00:04:01.171 EAL: No shared files mode enabled, IPC is disabled 00:04:01.171 EAL: Heap on socket 0 was shrunk by 130MB 00:04:01.171 EAL: Trying to obtain current memory policy. 00:04:01.171 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.171 EAL: Restoring previous memory policy: 4 00:04:01.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.171 EAL: request: mp_malloc_sync 00:04:01.171 EAL: No shared files mode enabled, IPC is disabled 00:04:01.171 EAL: Heap on socket 0 was expanded by 258MB 00:04:01.171 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.430 EAL: request: mp_malloc_sync 00:04:01.430 EAL: No shared files mode enabled, IPC is disabled 00:04:01.430 EAL: Heap on socket 0 was shrunk by 258MB 00:04:01.430 EAL: Trying to obtain current memory policy. 
00:04:01.430 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.430 EAL: Restoring previous memory policy: 4 00:04:01.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.430 EAL: request: mp_malloc_sync 00:04:01.430 EAL: No shared files mode enabled, IPC is disabled 00:04:01.430 EAL: Heap on socket 0 was expanded by 514MB 00:04:01.430 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.689 EAL: request: mp_malloc_sync 00:04:01.689 EAL: No shared files mode enabled, IPC is disabled 00:04:01.689 EAL: Heap on socket 0 was shrunk by 514MB 00:04:01.689 EAL: Trying to obtain current memory policy. 00:04:01.689 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.689 EAL: Restoring previous memory policy: 4 00:04:01.689 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.689 EAL: request: mp_malloc_sync 00:04:01.689 EAL: No shared files mode enabled, IPC is disabled 00:04:01.689 EAL: Heap on socket 0 was expanded by 1026MB 00:04:01.948 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.207 EAL: request: mp_malloc_sync 00:04:02.207 EAL: No shared files mode enabled, IPC is disabled 00:04:02.207 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:02.207 passed 00:04:02.207 00:04:02.207 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.208 suites 1 1 n/a 0 0 00:04:02.208 tests 2 2 2 0 0 00:04:02.208 asserts 497 497 497 0 n/a 00:04:02.208 00:04:02.208 Elapsed time = 0.961 seconds 00:04:02.208 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.208 EAL: request: mp_malloc_sync 00:04:02.208 EAL: No shared files mode enabled, IPC is disabled 00:04:02.208 EAL: Heap on socket 0 was shrunk by 2MB 00:04:02.208 EAL: No shared files mode enabled, IPC is disabled 00:04:02.208 EAL: No shared files mode enabled, IPC is disabled 00:04:02.208 EAL: No shared files mode enabled, IPC is disabled 00:04:02.208 00:04:02.208 real 0m1.066s 00:04:02.208 user 0m0.635s 00:04:02.208 sys 0m0.407s 00:04:02.208 16:54:49 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:02.208 16:54:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:02.208 ************************************ 00:04:02.208 END TEST env_vtophys 00:04:02.208 ************************************ 00:04:02.208 16:54:49 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:02.208 16:54:49 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:02.208 16:54:49 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:02.208 16:54:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.208 ************************************ 00:04:02.208 START TEST env_pci 00:04:02.208 ************************************ 00:04:02.208 16:54:49 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:02.208 00:04:02.208 00:04:02.208 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.208 http://cunit.sourceforge.net/ 00:04:02.208 00:04:02.208 00:04:02.208 Suite: pci 00:04:02.208 Test: pci_hook ...[2024-05-15 16:54:49.699764] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2879894 has claimed it 00:04:02.208 EAL: Cannot find device (10000:00:01.0) 00:04:02.208 EAL: Failed to attach device on primary process 00:04:02.208 passed 00:04:02.208 00:04:02.208 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:02.208 suites 1 1 n/a 0 0 00:04:02.208 tests 1 1 1 0 0 00:04:02.208 asserts 25 25 25 0 n/a 00:04:02.208 00:04:02.208 Elapsed time = 0.029 seconds 00:04:02.208 00:04:02.208 real 0m0.049s 00:04:02.208 user 0m0.018s 00:04:02.208 sys 0m0.030s 00:04:02.208 16:54:49 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:02.208 16:54:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:02.208 ************************************ 00:04:02.208 END TEST env_pci 00:04:02.208 ************************************ 00:04:02.208 16:54:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:02.208 16:54:49 env -- env/env.sh@15 -- # uname 00:04:02.208 16:54:49 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:02.208 16:54:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:02.208 16:54:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:02.208 16:54:49 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:04:02.208 16:54:49 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:02.208 16:54:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.208 ************************************ 00:04:02.208 START TEST env_dpdk_post_init 00:04:02.208 ************************************ 00:04:02.208 16:54:49 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:02.208 EAL: Detected CPU lcores: 96 00:04:02.208 EAL: Detected NUMA nodes: 2 00:04:02.208 EAL: Detected shared linkage of DPDK 00:04:02.208 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:02.208 EAL: Selected IOVA mode 'VA' 00:04:02.208 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.208 EAL: VFIO support initialized 00:04:02.208 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:02.467 EAL: Using IOMMU type 1 (Type 1) 00:04:02.467 EAL: Ignore mapping IO port bar(1) 00:04:02.467 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:02.467 EAL: Ignore mapping IO port bar(1) 00:04:02.467 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:02.467 EAL: Ignore mapping IO port bar(1) 00:04:02.467 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:02.467 EAL: Ignore mapping IO port bar(1) 00:04:02.467 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:02.467 EAL: Ignore mapping IO port bar(1) 00:04:02.467 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:02.467 EAL: Ignore mapping IO port bar(1) 00:04:02.467 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:02.467 EAL: Ignore mapping IO port bar(1) 00:04:02.467 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:02.467 EAL: Ignore mapping IO port bar(1) 00:04:02.467 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:03.406 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:03.406 EAL: Ignore mapping IO port bar(1) 00:04:03.406 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:03.406 EAL: Ignore mapping IO port bar(1) 00:04:03.406 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 
00:04:03.406 EAL: Ignore mapping IO port bar(1) 00:04:03.406 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:03.406 EAL: Ignore mapping IO port bar(1) 00:04:03.406 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:03.406 EAL: Ignore mapping IO port bar(1) 00:04:03.406 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:03.406 EAL: Ignore mapping IO port bar(1) 00:04:03.406 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:03.406 EAL: Ignore mapping IO port bar(1) 00:04:03.406 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:03.406 EAL: Ignore mapping IO port bar(1) 00:04:03.406 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:06.693 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:06.693 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:06.693 Starting DPDK initialization... 00:04:06.693 Starting SPDK post initialization... 00:04:06.693 SPDK NVMe probe 00:04:06.693 Attaching to 0000:5e:00.0 00:04:06.693 Attached to 0000:5e:00.0 00:04:06.693 Cleaning up... 00:04:06.693 00:04:06.693 real 0m4.314s 00:04:06.693 user 0m3.265s 00:04:06.693 sys 0m0.119s 00:04:06.693 16:54:54 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:06.693 16:54:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:06.693 ************************************ 00:04:06.693 END TEST env_dpdk_post_init 00:04:06.693 ************************************ 00:04:06.693 16:54:54 env -- env/env.sh@26 -- # uname 00:04:06.693 16:54:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:06.693 16:54:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.693 16:54:54 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:06.693 16:54:54 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.693 16:54:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.693 ************************************ 00:04:06.693 START TEST env_mem_callbacks 00:04:06.693 ************************************ 00:04:06.693 16:54:54 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.693 EAL: Detected CPU lcores: 96 00:04:06.693 EAL: Detected NUMA nodes: 2 00:04:06.693 EAL: Detected shared linkage of DPDK 00:04:06.693 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.693 EAL: Selected IOVA mode 'VA' 00:04:06.693 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.693 EAL: VFIO support initialized 00:04:06.693 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.693 00:04:06.693 00:04:06.693 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.693 http://cunit.sourceforge.net/ 00:04:06.693 00:04:06.693 00:04:06.693 Suite: memory 00:04:06.693 Test: test ... 
00:04:06.693 register 0x200000200000 2097152 00:04:06.693 malloc 3145728 00:04:06.693 register 0x200000400000 4194304 00:04:06.693 buf 0x200000500000 len 3145728 PASSED 00:04:06.693 malloc 64 00:04:06.693 buf 0x2000004fff40 len 64 PASSED 00:04:06.693 malloc 4194304 00:04:06.693 register 0x200000800000 6291456 00:04:06.693 buf 0x200000a00000 len 4194304 PASSED 00:04:06.693 free 0x200000500000 3145728 00:04:06.693 free 0x2000004fff40 64 00:04:06.693 unregister 0x200000400000 4194304 PASSED 00:04:06.693 free 0x200000a00000 4194304 00:04:06.693 unregister 0x200000800000 6291456 PASSED 00:04:06.693 malloc 8388608 00:04:06.693 register 0x200000400000 10485760 00:04:06.693 buf 0x200000600000 len 8388608 PASSED 00:04:06.693 free 0x200000600000 8388608 00:04:06.693 unregister 0x200000400000 10485760 PASSED 00:04:06.693 passed 00:04:06.693 00:04:06.693 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.693 suites 1 1 n/a 0 0 00:04:06.693 tests 1 1 1 0 0 00:04:06.693 asserts 15 15 15 0 n/a 00:04:06.693 00:04:06.693 Elapsed time = 0.005 seconds 00:04:06.693 00:04:06.693 real 0m0.053s 00:04:06.693 user 0m0.019s 00:04:06.693 sys 0m0.034s 00:04:06.693 16:54:54 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:06.693 16:54:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:06.693 ************************************ 00:04:06.693 END TEST env_mem_callbacks 00:04:06.693 ************************************ 00:04:06.693 00:04:06.693 real 0m6.042s 00:04:06.693 user 0m4.248s 00:04:06.693 sys 0m0.850s 00:04:06.693 16:54:54 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:06.693 16:54:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.693 ************************************ 00:04:06.693 END TEST env 00:04:06.693 ************************************ 00:04:06.693 16:54:54 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:06.693 16:54:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:06.693 16:54:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.693 16:54:54 -- common/autotest_common.sh@10 -- # set +x 00:04:06.693 ************************************ 00:04:06.693 START TEST rpc 00:04:06.693 ************************************ 00:04:06.693 16:54:54 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:06.952 * Looking for test storage... 00:04:06.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.952 16:54:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2880806 00:04:06.952 16:54:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.952 16:54:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:06.952 16:54:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2880806 00:04:06.952 16:54:54 rpc -- common/autotest_common.sh@827 -- # '[' -z 2880806 ']' 00:04:06.952 16:54:54 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.952 16:54:54 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:06.952 16:54:54 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:06.952 16:54:54 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:06.952 16:54:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.952 [2024-05-15 16:54:54.476746] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:04:06.952 [2024-05-15 16:54:54.476789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880806 ] 00:04:06.952 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.952 [2024-05-15 16:54:54.530034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.952 [2024-05-15 16:54:54.609216] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:06.952 [2024-05-15 16:54:54.609250] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2880806' to capture a snapshot of events at runtime. 00:04:06.952 [2024-05-15 16:54:54.609259] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:06.952 [2024-05-15 16:54:54.609265] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:06.953 [2024-05-15 16:54:54.609270] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2880806 for offline analysis/debug. 00:04:06.953 [2024-05-15 16:54:54.609288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.890 16:54:55 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:07.890 16:54:55 rpc -- common/autotest_common.sh@860 -- # return 0 00:04:07.890 16:54:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:07.890 16:54:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:07.890 16:54:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:07.890 16:54:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:07.890 16:54:55 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:07.890 16:54:55 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:07.890 16:54:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.890 ************************************ 00:04:07.890 START TEST rpc_integrity 00:04:07.890 ************************************ 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:07.890 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.890 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.890 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.890 16:54:55 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:07.890 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.890 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:07.890 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.890 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:07.890 { 00:04:07.890 "name": "Malloc0", 00:04:07.890 "aliases": [ 00:04:07.890 "162f2982-2ef7-46a3-be1c-e224f36e5a04" 00:04:07.890 ], 00:04:07.890 "product_name": "Malloc disk", 00:04:07.890 "block_size": 512, 00:04:07.890 "num_blocks": 16384, 00:04:07.890 "uuid": "162f2982-2ef7-46a3-be1c-e224f36e5a04", 00:04:07.890 "assigned_rate_limits": { 00:04:07.890 "rw_ios_per_sec": 0, 00:04:07.890 "rw_mbytes_per_sec": 0, 00:04:07.890 "r_mbytes_per_sec": 0, 00:04:07.890 "w_mbytes_per_sec": 0 00:04:07.890 }, 00:04:07.890 "claimed": false, 00:04:07.890 "zoned": false, 00:04:07.890 "supported_io_types": { 00:04:07.890 "read": true, 00:04:07.890 "write": true, 00:04:07.890 "unmap": true, 00:04:07.890 "write_zeroes": true, 00:04:07.890 "flush": true, 00:04:07.890 "reset": true, 00:04:07.890 "compare": false, 00:04:07.890 "compare_and_write": false, 00:04:07.890 "abort": true, 00:04:07.890 "nvme_admin": false, 00:04:07.890 "nvme_io": false 00:04:07.890 }, 00:04:07.890 "memory_domains": [ 00:04:07.890 { 00:04:07.890 "dma_device_id": "system", 00:04:07.890 "dma_device_type": 1 00:04:07.890 }, 00:04:07.890 { 00:04:07.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.890 "dma_device_type": 2 00:04:07.890 } 00:04:07.890 ], 00:04:07.890 "driver_specific": {} 00:04:07.890 } 00:04:07.890 ]' 00:04:07.890 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.890 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.890 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.890 [2024-05-15 16:54:55.410962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:07.890 [2024-05-15 16:54:55.410991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.890 [2024-05-15 16:54:55.411003] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xef2260 00:04:07.890 [2024-05-15 16:54:55.411009] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.890 [2024-05-15 16:54:55.412072] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.890 [2024-05-15 16:54:55.412092] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:07.890 Passthru0 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.890 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.890 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.890 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.890 { 00:04:07.890 "name": "Malloc0", 00:04:07.890 "aliases": [ 00:04:07.890 "162f2982-2ef7-46a3-be1c-e224f36e5a04" 00:04:07.890 ], 00:04:07.890 "product_name": "Malloc disk", 00:04:07.890 "block_size": 512, 00:04:07.890 "num_blocks": 16384, 00:04:07.890 "uuid": "162f2982-2ef7-46a3-be1c-e224f36e5a04", 00:04:07.890 "assigned_rate_limits": { 00:04:07.891 "rw_ios_per_sec": 0, 00:04:07.891 "rw_mbytes_per_sec": 0, 00:04:07.891 "r_mbytes_per_sec": 0, 00:04:07.891 "w_mbytes_per_sec": 0 00:04:07.891 }, 00:04:07.891 "claimed": true, 00:04:07.891 "claim_type": "exclusive_write", 00:04:07.891 "zoned": false, 00:04:07.891 "supported_io_types": { 00:04:07.891 "read": true, 00:04:07.891 "write": true, 00:04:07.891 "unmap": true, 00:04:07.891 "write_zeroes": true, 00:04:07.891 "flush": true, 00:04:07.891 "reset": true, 00:04:07.891 "compare": false, 00:04:07.891 "compare_and_write": false, 00:04:07.891 "abort": true, 00:04:07.891 "nvme_admin": false, 00:04:07.891 "nvme_io": false 00:04:07.891 }, 00:04:07.891 "memory_domains": [ 00:04:07.891 { 00:04:07.891 "dma_device_id": "system", 00:04:07.891 "dma_device_type": 1 00:04:07.891 }, 00:04:07.891 { 00:04:07.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.891 "dma_device_type": 2 00:04:07.891 } 00:04:07.891 ], 00:04:07.891 "driver_specific": {} 00:04:07.891 }, 00:04:07.891 { 00:04:07.891 "name": "Passthru0", 00:04:07.891 "aliases": [ 00:04:07.891 "fe33c9d2-5323-5ea9-a835-e0b2f27f88ad" 00:04:07.891 ], 00:04:07.891 "product_name": "passthru", 00:04:07.891 "block_size": 512, 00:04:07.891 "num_blocks": 16384, 00:04:07.891 "uuid": "fe33c9d2-5323-5ea9-a835-e0b2f27f88ad", 00:04:07.891 "assigned_rate_limits": { 00:04:07.891 "rw_ios_per_sec": 0, 00:04:07.891 "rw_mbytes_per_sec": 0, 00:04:07.891 "r_mbytes_per_sec": 0, 00:04:07.891 "w_mbytes_per_sec": 0 00:04:07.891 }, 00:04:07.891 "claimed": false, 00:04:07.891 "zoned": false, 00:04:07.891 "supported_io_types": { 00:04:07.891 "read": true, 00:04:07.891 "write": true, 00:04:07.891 "unmap": true, 00:04:07.891 "write_zeroes": true, 00:04:07.891 "flush": true, 00:04:07.891 "reset": true, 00:04:07.891 "compare": false, 00:04:07.891 "compare_and_write": false, 00:04:07.891 "abort": true, 00:04:07.891 "nvme_admin": false, 00:04:07.891 "nvme_io": false 00:04:07.891 }, 00:04:07.891 "memory_domains": [ 00:04:07.891 { 00:04:07.891 "dma_device_id": "system", 00:04:07.891 "dma_device_type": 1 00:04:07.891 }, 00:04:07.891 { 00:04:07.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.891 "dma_device_type": 2 00:04:07.891 } 00:04:07.891 ], 00:04:07.891 "driver_specific": { 00:04:07.891 "passthru": { 00:04:07.891 "name": "Passthru0", 00:04:07.891 "base_bdev_name": "Malloc0" 00:04:07.891 } 00:04:07.891 } 00:04:07.891 } 00:04:07.891 ]' 00:04:07.891 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.891 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.891 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.891 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.891 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.891 
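The bdev_get_bdevs dump above shows the effect of the two creates: Malloc0 (16384 × 512-byte blocks) is now claimed exclusive_write by the Passthru0 bdev stacked on it, so jq reports two bdevs, and the trace that follows deletes both and checks the list drops back to zero. The same round trip can be driven by hand against a running target (a sketch using scripts/rpc.py, which the test's rpc_cmd wrapper ultimately calls; assumes the default /var/tmp/spdk.sock):

    rpc=./scripts/rpc.py
    $rpc bdev_get_bdevs | jq length                       # 0 on a fresh target
    malloc=$($rpc bdev_malloc_create 8 512)               # 8 MiB malloc bdev, 512-byte blocks -> Malloc0
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0   # layer a passthru bdev on top
    $rpc bdev_get_bdevs | jq length                       # 2: the claimed base plus Passthru0
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"
    $rpc bdev_get_bdevs | jq length                       # back to 0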
16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.891 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:07.891 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.891 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.891 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.891 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.891 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.891 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.891 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.891 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.891 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.891 16:54:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.891 00:04:07.891 real 0m0.245s 00:04:07.891 user 0m0.159s 00:04:07.891 sys 0m0.039s 00:04:07.891 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:07.891 16:54:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.891 ************************************ 00:04:07.891 END TEST rpc_integrity 00:04:07.891 ************************************ 00:04:08.150 16:54:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:08.150 16:54:55 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:08.150 16:54:55 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:08.150 16:54:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.150 ************************************ 00:04:08.150 START TEST rpc_plugins 00:04:08.150 ************************************ 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:04:08.150 16:54:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.150 16:54:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:08.150 16:54:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.150 16:54:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:08.150 { 00:04:08.150 "name": "Malloc1", 00:04:08.150 "aliases": [ 00:04:08.150 "898815ac-0fe1-4ed2-ac45-43ef6728cf15" 00:04:08.150 ], 00:04:08.150 "product_name": "Malloc disk", 00:04:08.150 "block_size": 4096, 00:04:08.150 "num_blocks": 256, 00:04:08.150 "uuid": "898815ac-0fe1-4ed2-ac45-43ef6728cf15", 00:04:08.150 "assigned_rate_limits": { 00:04:08.150 "rw_ios_per_sec": 0, 00:04:08.150 "rw_mbytes_per_sec": 0, 00:04:08.150 "r_mbytes_per_sec": 0, 00:04:08.150 "w_mbytes_per_sec": 0 00:04:08.150 }, 00:04:08.150 "claimed": false, 00:04:08.150 "zoned": false, 00:04:08.150 "supported_io_types": { 00:04:08.150 "read": true, 00:04:08.150 "write": true, 00:04:08.150 "unmap": true, 00:04:08.150 "write_zeroes": true, 00:04:08.150 
"flush": true, 00:04:08.150 "reset": true, 00:04:08.150 "compare": false, 00:04:08.150 "compare_and_write": false, 00:04:08.150 "abort": true, 00:04:08.150 "nvme_admin": false, 00:04:08.150 "nvme_io": false 00:04:08.150 }, 00:04:08.150 "memory_domains": [ 00:04:08.150 { 00:04:08.150 "dma_device_id": "system", 00:04:08.150 "dma_device_type": 1 00:04:08.150 }, 00:04:08.150 { 00:04:08.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.150 "dma_device_type": 2 00:04:08.150 } 00:04:08.150 ], 00:04:08.150 "driver_specific": {} 00:04:08.150 } 00:04:08.150 ]' 00:04:08.150 16:54:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:08.150 16:54:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:08.150 16:54:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.150 16:54:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.150 16:54:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:08.150 16:54:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:08.150 16:54:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:08.150 00:04:08.150 real 0m0.135s 00:04:08.150 user 0m0.086s 00:04:08.150 sys 0m0.017s 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:08.150 16:54:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.150 ************************************ 00:04:08.150 END TEST rpc_plugins 00:04:08.150 ************************************ 00:04:08.150 16:54:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:08.150 16:54:55 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:08.150 16:54:55 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:08.150 16:54:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.409 ************************************ 00:04:08.409 START TEST rpc_trace_cmd_test 00:04:08.409 ************************************ 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:08.409 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2880806", 00:04:08.409 "tpoint_group_mask": "0x8", 00:04:08.409 "iscsi_conn": { 00:04:08.409 "mask": "0x2", 00:04:08.409 "tpoint_mask": "0x0" 00:04:08.409 }, 00:04:08.409 "scsi": { 00:04:08.409 "mask": "0x4", 00:04:08.409 "tpoint_mask": "0x0" 00:04:08.409 }, 00:04:08.409 "bdev": { 00:04:08.409 "mask": "0x8", 00:04:08.409 "tpoint_mask": 
"0xffffffffffffffff" 00:04:08.409 }, 00:04:08.409 "nvmf_rdma": { 00:04:08.409 "mask": "0x10", 00:04:08.409 "tpoint_mask": "0x0" 00:04:08.409 }, 00:04:08.409 "nvmf_tcp": { 00:04:08.409 "mask": "0x20", 00:04:08.409 "tpoint_mask": "0x0" 00:04:08.409 }, 00:04:08.409 "ftl": { 00:04:08.409 "mask": "0x40", 00:04:08.409 "tpoint_mask": "0x0" 00:04:08.409 }, 00:04:08.409 "blobfs": { 00:04:08.409 "mask": "0x80", 00:04:08.409 "tpoint_mask": "0x0" 00:04:08.409 }, 00:04:08.409 "dsa": { 00:04:08.409 "mask": "0x200", 00:04:08.409 "tpoint_mask": "0x0" 00:04:08.409 }, 00:04:08.409 "thread": { 00:04:08.409 "mask": "0x400", 00:04:08.409 "tpoint_mask": "0x0" 00:04:08.409 }, 00:04:08.409 "nvme_pcie": { 00:04:08.409 "mask": "0x800", 00:04:08.409 "tpoint_mask": "0x0" 00:04:08.409 }, 00:04:08.409 "iaa": { 00:04:08.409 "mask": "0x1000", 00:04:08.409 "tpoint_mask": "0x0" 00:04:08.409 }, 00:04:08.409 "nvme_tcp": { 00:04:08.409 "mask": "0x2000", 00:04:08.409 "tpoint_mask": "0x0" 00:04:08.409 }, 00:04:08.409 "bdev_nvme": { 00:04:08.409 "mask": "0x4000", 00:04:08.409 "tpoint_mask": "0x0" 00:04:08.409 }, 00:04:08.409 "sock": { 00:04:08.409 "mask": "0x8000", 00:04:08.409 "tpoint_mask": "0x0" 00:04:08.409 } 00:04:08.409 }' 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:08.409 16:54:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:08.409 16:54:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:08.409 00:04:08.409 real 0m0.213s 00:04:08.409 user 0m0.185s 00:04:08.409 sys 0m0.019s 00:04:08.409 16:54:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:08.409 16:54:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:08.409 ************************************ 00:04:08.409 END TEST rpc_trace_cmd_test 00:04:08.409 ************************************ 00:04:08.409 16:54:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:08.410 16:54:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:08.410 16:54:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:08.410 16:54:56 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:08.410 16:54:56 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:08.410 16:54:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.677 ************************************ 00:04:08.677 START TEST rpc_daemon_integrity 00:04:08.677 ************************************ 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.677 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.677 { 00:04:08.677 "name": "Malloc2", 00:04:08.677 "aliases": [ 00:04:08.677 "10b136e9-14f4-4190-9801-ee0d062afe9b" 00:04:08.677 ], 00:04:08.677 "product_name": "Malloc disk", 00:04:08.677 "block_size": 512, 00:04:08.677 "num_blocks": 16384, 00:04:08.677 "uuid": "10b136e9-14f4-4190-9801-ee0d062afe9b", 00:04:08.677 "assigned_rate_limits": { 00:04:08.677 "rw_ios_per_sec": 0, 00:04:08.677 "rw_mbytes_per_sec": 0, 00:04:08.677 "r_mbytes_per_sec": 0, 00:04:08.677 "w_mbytes_per_sec": 0 00:04:08.677 }, 00:04:08.677 "claimed": false, 00:04:08.677 "zoned": false, 00:04:08.677 "supported_io_types": { 00:04:08.677 "read": true, 00:04:08.678 "write": true, 00:04:08.678 "unmap": true, 00:04:08.678 "write_zeroes": true, 00:04:08.678 "flush": true, 00:04:08.678 "reset": true, 00:04:08.678 "compare": false, 00:04:08.678 "compare_and_write": false, 00:04:08.678 "abort": true, 00:04:08.678 "nvme_admin": false, 00:04:08.678 "nvme_io": false 00:04:08.678 }, 00:04:08.678 "memory_domains": [ 00:04:08.678 { 00:04:08.678 "dma_device_id": "system", 00:04:08.678 "dma_device_type": 1 00:04:08.678 }, 00:04:08.678 { 00:04:08.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.678 "dma_device_type": 2 00:04:08.678 } 00:04:08.678 ], 00:04:08.678 "driver_specific": {} 00:04:08.678 } 00:04:08.678 ]' 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.678 [2024-05-15 16:54:56.233191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:08.678 [2024-05-15 16:54:56.233218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:08.678 [2024-05-15 16:54:56.233230] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1096340 00:04:08.678 [2024-05-15 16:54:56.233236] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:08.678 [2024-05-15 16:54:56.234216] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:08.678 [2024-05-15 16:54:56.234235] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:08.678 Passthru0 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.678 { 00:04:08.678 "name": "Malloc2", 00:04:08.678 "aliases": [ 00:04:08.678 "10b136e9-14f4-4190-9801-ee0d062afe9b" 00:04:08.678 ], 00:04:08.678 "product_name": "Malloc disk", 00:04:08.678 "block_size": 512, 00:04:08.678 "num_blocks": 16384, 00:04:08.678 "uuid": "10b136e9-14f4-4190-9801-ee0d062afe9b", 00:04:08.678 "assigned_rate_limits": { 00:04:08.678 "rw_ios_per_sec": 0, 00:04:08.678 "rw_mbytes_per_sec": 0, 00:04:08.678 "r_mbytes_per_sec": 0, 00:04:08.678 "w_mbytes_per_sec": 0 00:04:08.678 }, 00:04:08.678 "claimed": true, 00:04:08.678 "claim_type": "exclusive_write", 00:04:08.678 "zoned": false, 00:04:08.678 "supported_io_types": { 00:04:08.678 "read": true, 00:04:08.678 "write": true, 00:04:08.678 "unmap": true, 00:04:08.678 "write_zeroes": true, 00:04:08.678 "flush": true, 00:04:08.678 "reset": true, 00:04:08.678 "compare": false, 00:04:08.678 "compare_and_write": false, 00:04:08.678 "abort": true, 00:04:08.678 "nvme_admin": false, 00:04:08.678 "nvme_io": false 00:04:08.678 }, 00:04:08.678 "memory_domains": [ 00:04:08.678 { 00:04:08.678 "dma_device_id": "system", 00:04:08.678 "dma_device_type": 1 00:04:08.678 }, 00:04:08.678 { 00:04:08.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.678 "dma_device_type": 2 00:04:08.678 } 00:04:08.678 ], 00:04:08.678 "driver_specific": {} 00:04:08.678 }, 00:04:08.678 { 00:04:08.678 "name": "Passthru0", 00:04:08.678 "aliases": [ 00:04:08.678 "e7613d7b-c2a4-5db4-926a-828740613e73" 00:04:08.678 ], 00:04:08.678 "product_name": "passthru", 00:04:08.678 "block_size": 512, 00:04:08.678 "num_blocks": 16384, 00:04:08.678 "uuid": "e7613d7b-c2a4-5db4-926a-828740613e73", 00:04:08.678 "assigned_rate_limits": { 00:04:08.678 "rw_ios_per_sec": 0, 00:04:08.678 "rw_mbytes_per_sec": 0, 00:04:08.678 "r_mbytes_per_sec": 0, 00:04:08.678 "w_mbytes_per_sec": 0 00:04:08.678 }, 00:04:08.678 "claimed": false, 00:04:08.678 "zoned": false, 00:04:08.678 "supported_io_types": { 00:04:08.678 "read": true, 00:04:08.678 "write": true, 00:04:08.678 "unmap": true, 00:04:08.678 "write_zeroes": true, 00:04:08.678 "flush": true, 00:04:08.678 "reset": true, 00:04:08.678 "compare": false, 00:04:08.678 "compare_and_write": false, 00:04:08.678 "abort": true, 00:04:08.678 "nvme_admin": false, 00:04:08.678 "nvme_io": false 00:04:08.678 }, 00:04:08.678 "memory_domains": [ 00:04:08.678 { 00:04:08.678 "dma_device_id": "system", 00:04:08.678 "dma_device_type": 1 00:04:08.678 }, 00:04:08.678 { 00:04:08.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.678 "dma_device_type": 2 00:04:08.678 } 00:04:08.678 ], 00:04:08.678 "driver_specific": { 00:04:08.678 "passthru": { 00:04:08.678 "name": "Passthru0", 00:04:08.678 "base_bdev_name": "Malloc2" 00:04:08.678 } 00:04:08.678 } 00:04:08.678 } 00:04:08.678 ]' 00:04:08.678 16:54:56 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:08.678 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:08.990 16:54:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:08.990 00:04:08.990 real 0m0.257s 00:04:08.990 user 0m0.174s 00:04:08.990 sys 0m0.031s 00:04:08.990 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:08.990 16:54:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.990 ************************************ 00:04:08.990 END TEST rpc_daemon_integrity 00:04:08.990 ************************************ 00:04:08.990 16:54:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:08.990 16:54:56 rpc -- rpc/rpc.sh@84 -- # killprocess 2880806 00:04:08.990 16:54:56 rpc -- common/autotest_common.sh@946 -- # '[' -z 2880806 ']' 00:04:08.990 16:54:56 rpc -- common/autotest_common.sh@950 -- # kill -0 2880806 00:04:08.990 16:54:56 rpc -- common/autotest_common.sh@951 -- # uname 00:04:08.990 16:54:56 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:08.990 16:54:56 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2880806 00:04:08.990 16:54:56 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:08.990 16:54:56 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:08.990 16:54:56 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2880806' 00:04:08.990 killing process with pid 2880806 00:04:08.990 16:54:56 rpc -- common/autotest_common.sh@965 -- # kill 2880806 00:04:08.990 16:54:56 rpc -- common/autotest_common.sh@970 -- # wait 2880806 00:04:09.249 00:04:09.249 real 0m2.433s 00:04:09.249 user 0m3.139s 00:04:09.249 sys 0m0.645s 00:04:09.250 16:54:56 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:09.250 16:54:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.250 ************************************ 00:04:09.250 END TEST rpc 00:04:09.250 ************************************ 00:04:09.250 16:54:56 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:09.250 16:54:56 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:09.250 16:54:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:09.250 16:54:56 -- common/autotest_common.sh@10 -- # set +x 00:04:09.250 ************************************ 00:04:09.250 START TEST skip_rpc 00:04:09.250 ************************************ 00:04:09.250 16:54:56 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:09.509 * Looking for test storage... 00:04:09.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:09.509 16:54:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:09.509 16:54:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:09.509 16:54:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:09.509 16:54:56 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:09.509 16:54:56 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:09.509 16:54:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.509 ************************************ 00:04:09.509 START TEST skip_rpc 00:04:09.509 ************************************ 00:04:09.509 16:54:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:04:09.509 16:54:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2881441 00:04:09.509 16:54:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.509 16:54:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:09.509 16:54:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:09.509 [2024-05-15 16:54:57.004458] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:04:09.509 [2024-05-15 16:54:57.004494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881441 ] 00:04:09.509 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.509 [2024-05-15 16:54:57.056285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.509 [2024-05-15 16:54:57.127520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:14.778 16:55:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2881441 00:04:14.779 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 2881441 ']' 00:04:14.779 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 2881441 00:04:14.779 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:04:14.779 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:14.779 16:55:01 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2881441 00:04:14.779 16:55:02 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:14.779 16:55:02 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:14.779 16:55:02 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2881441' 00:04:14.779 killing process with pid 2881441 00:04:14.779 16:55:02 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 2881441 00:04:14.779 16:55:02 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 2881441 00:04:14.779 00:04:14.779 real 0m5.394s 00:04:14.779 user 0m5.177s 00:04:14.779 sys 0m0.247s 00:04:14.779 16:55:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:14.779 16:55:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.779 ************************************ 00:04:14.779 END TEST skip_rpc 
00:04:14.779 ************************************ 00:04:14.779 16:55:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:14.779 16:55:02 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:14.779 16:55:02 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:14.779 16:55:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.779 ************************************ 00:04:14.779 START TEST skip_rpc_with_json 00:04:14.779 ************************************ 00:04:14.779 16:55:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:04:14.779 16:55:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:14.779 16:55:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2882407 00:04:14.779 16:55:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.779 16:55:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:14.779 16:55:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2882407 00:04:14.779 16:55:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 2882407 ']' 00:04:14.779 16:55:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.779 16:55:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:14.779 16:55:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.779 16:55:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:14.779 16:55:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.037 [2024-05-15 16:55:02.464563] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:04:15.037 [2024-05-15 16:55:02.464600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882407 ] 00:04:15.037 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.037 [2024-05-15 16:55:02.516246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.037 [2024-05-15 16:55:02.596313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.603 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:15.603 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:04:15.603 16:55:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:15.603 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.603 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.862 [2024-05-15 16:55:03.264953] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:15.862 request: 00:04:15.862 { 00:04:15.862 "trtype": "tcp", 00:04:15.862 "method": "nvmf_get_transports", 00:04:15.862 "req_id": 1 00:04:15.862 } 00:04:15.862 Got JSON-RPC error response 00:04:15.862 response: 00:04:15.862 { 00:04:15.862 "code": -19, 00:04:15.862 "message": "No such device" 00:04:15.862 } 00:04:15.862 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:15.862 16:55:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:15.862 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.862 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.862 [2024-05-15 16:55:03.277052] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:15.862 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.862 16:55:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:15.862 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.862 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.862 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.862 16:55:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:15.862 { 00:04:15.862 "subsystems": [ 00:04:15.862 { 00:04:15.862 "subsystem": "vfio_user_target", 00:04:15.862 "config": null 00:04:15.862 }, 00:04:15.862 { 00:04:15.862 "subsystem": "keyring", 00:04:15.862 "config": [] 00:04:15.862 }, 00:04:15.862 { 00:04:15.862 "subsystem": "iobuf", 00:04:15.862 "config": [ 00:04:15.862 { 00:04:15.862 "method": "iobuf_set_options", 00:04:15.862 "params": { 00:04:15.862 "small_pool_count": 8192, 00:04:15.862 "large_pool_count": 1024, 00:04:15.862 "small_bufsize": 8192, 00:04:15.862 "large_bufsize": 135168 00:04:15.862 } 00:04:15.862 } 00:04:15.862 ] 00:04:15.862 }, 00:04:15.862 { 00:04:15.862 "subsystem": "sock", 00:04:15.862 "config": [ 00:04:15.862 { 00:04:15.862 "method": "sock_impl_set_options", 00:04:15.862 "params": { 00:04:15.862 "impl_name": "posix", 00:04:15.862 "recv_buf_size": 2097152, 00:04:15.862 "send_buf_size": 2097152, 
00:04:15.862 "enable_recv_pipe": true, 00:04:15.862 "enable_quickack": false, 00:04:15.862 "enable_placement_id": 0, 00:04:15.862 "enable_zerocopy_send_server": true, 00:04:15.862 "enable_zerocopy_send_client": false, 00:04:15.862 "zerocopy_threshold": 0, 00:04:15.862 "tls_version": 0, 00:04:15.862 "enable_ktls": false 00:04:15.862 } 00:04:15.862 }, 00:04:15.862 { 00:04:15.862 "method": "sock_impl_set_options", 00:04:15.862 "params": { 00:04:15.862 "impl_name": "ssl", 00:04:15.862 "recv_buf_size": 4096, 00:04:15.862 "send_buf_size": 4096, 00:04:15.862 "enable_recv_pipe": true, 00:04:15.862 "enable_quickack": false, 00:04:15.862 "enable_placement_id": 0, 00:04:15.862 "enable_zerocopy_send_server": true, 00:04:15.862 "enable_zerocopy_send_client": false, 00:04:15.862 "zerocopy_threshold": 0, 00:04:15.862 "tls_version": 0, 00:04:15.862 "enable_ktls": false 00:04:15.862 } 00:04:15.862 } 00:04:15.862 ] 00:04:15.862 }, 00:04:15.862 { 00:04:15.862 "subsystem": "vmd", 00:04:15.862 "config": [] 00:04:15.862 }, 00:04:15.862 { 00:04:15.862 "subsystem": "accel", 00:04:15.862 "config": [ 00:04:15.862 { 00:04:15.862 "method": "accel_set_options", 00:04:15.862 "params": { 00:04:15.862 "small_cache_size": 128, 00:04:15.862 "large_cache_size": 16, 00:04:15.862 "task_count": 2048, 00:04:15.862 "sequence_count": 2048, 00:04:15.862 "buf_count": 2048 00:04:15.862 } 00:04:15.862 } 00:04:15.862 ] 00:04:15.862 }, 00:04:15.862 { 00:04:15.862 "subsystem": "bdev", 00:04:15.862 "config": [ 00:04:15.862 { 00:04:15.862 "method": "bdev_set_options", 00:04:15.862 "params": { 00:04:15.862 "bdev_io_pool_size": 65535, 00:04:15.862 "bdev_io_cache_size": 256, 00:04:15.862 "bdev_auto_examine": true, 00:04:15.862 "iobuf_small_cache_size": 128, 00:04:15.862 "iobuf_large_cache_size": 16 00:04:15.862 } 00:04:15.862 }, 00:04:15.862 { 00:04:15.862 "method": "bdev_raid_set_options", 00:04:15.862 "params": { 00:04:15.862 "process_window_size_kb": 1024 00:04:15.862 } 00:04:15.862 }, 00:04:15.862 { 00:04:15.862 "method": "bdev_iscsi_set_options", 00:04:15.862 "params": { 00:04:15.862 "timeout_sec": 30 00:04:15.862 } 00:04:15.862 }, 00:04:15.862 { 00:04:15.862 "method": "bdev_nvme_set_options", 00:04:15.862 "params": { 00:04:15.863 "action_on_timeout": "none", 00:04:15.863 "timeout_us": 0, 00:04:15.863 "timeout_admin_us": 0, 00:04:15.863 "keep_alive_timeout_ms": 10000, 00:04:15.863 "arbitration_burst": 0, 00:04:15.863 "low_priority_weight": 0, 00:04:15.863 "medium_priority_weight": 0, 00:04:15.863 "high_priority_weight": 0, 00:04:15.863 "nvme_adminq_poll_period_us": 10000, 00:04:15.863 "nvme_ioq_poll_period_us": 0, 00:04:15.863 "io_queue_requests": 0, 00:04:15.863 "delay_cmd_submit": true, 00:04:15.863 "transport_retry_count": 4, 00:04:15.863 "bdev_retry_count": 3, 00:04:15.863 "transport_ack_timeout": 0, 00:04:15.863 "ctrlr_loss_timeout_sec": 0, 00:04:15.863 "reconnect_delay_sec": 0, 00:04:15.863 "fast_io_fail_timeout_sec": 0, 00:04:15.863 "disable_auto_failback": false, 00:04:15.863 "generate_uuids": false, 00:04:15.863 "transport_tos": 0, 00:04:15.863 "nvme_error_stat": false, 00:04:15.863 "rdma_srq_size": 0, 00:04:15.863 "io_path_stat": false, 00:04:15.863 "allow_accel_sequence": false, 00:04:15.863 "rdma_max_cq_size": 0, 00:04:15.863 "rdma_cm_event_timeout_ms": 0, 00:04:15.863 "dhchap_digests": [ 00:04:15.863 "sha256", 00:04:15.863 "sha384", 00:04:15.863 "sha512" 00:04:15.863 ], 00:04:15.863 "dhchap_dhgroups": [ 00:04:15.863 "null", 00:04:15.863 "ffdhe2048", 00:04:15.863 "ffdhe3072", 00:04:15.863 "ffdhe4096", 00:04:15.863 
"ffdhe6144", 00:04:15.863 "ffdhe8192" 00:04:15.863 ] 00:04:15.863 } 00:04:15.863 }, 00:04:15.863 { 00:04:15.863 "method": "bdev_nvme_set_hotplug", 00:04:15.863 "params": { 00:04:15.863 "period_us": 100000, 00:04:15.863 "enable": false 00:04:15.863 } 00:04:15.863 }, 00:04:15.863 { 00:04:15.863 "method": "bdev_wait_for_examine" 00:04:15.863 } 00:04:15.863 ] 00:04:15.863 }, 00:04:15.863 { 00:04:15.863 "subsystem": "scsi", 00:04:15.863 "config": null 00:04:15.863 }, 00:04:15.863 { 00:04:15.863 "subsystem": "scheduler", 00:04:15.863 "config": [ 00:04:15.863 { 00:04:15.863 "method": "framework_set_scheduler", 00:04:15.863 "params": { 00:04:15.863 "name": "static" 00:04:15.863 } 00:04:15.863 } 00:04:15.863 ] 00:04:15.863 }, 00:04:15.863 { 00:04:15.863 "subsystem": "vhost_scsi", 00:04:15.863 "config": [] 00:04:15.863 }, 00:04:15.863 { 00:04:15.863 "subsystem": "vhost_blk", 00:04:15.863 "config": [] 00:04:15.863 }, 00:04:15.863 { 00:04:15.863 "subsystem": "ublk", 00:04:15.863 "config": [] 00:04:15.863 }, 00:04:15.863 { 00:04:15.863 "subsystem": "nbd", 00:04:15.863 "config": [] 00:04:15.863 }, 00:04:15.863 { 00:04:15.863 "subsystem": "nvmf", 00:04:15.863 "config": [ 00:04:15.863 { 00:04:15.863 "method": "nvmf_set_config", 00:04:15.863 "params": { 00:04:15.863 "discovery_filter": "match_any", 00:04:15.863 "admin_cmd_passthru": { 00:04:15.863 "identify_ctrlr": false 00:04:15.863 } 00:04:15.863 } 00:04:15.863 }, 00:04:15.863 { 00:04:15.863 "method": "nvmf_set_max_subsystems", 00:04:15.863 "params": { 00:04:15.863 "max_subsystems": 1024 00:04:15.863 } 00:04:15.863 }, 00:04:15.863 { 00:04:15.863 "method": "nvmf_set_crdt", 00:04:15.863 "params": { 00:04:15.863 "crdt1": 0, 00:04:15.863 "crdt2": 0, 00:04:15.863 "crdt3": 0 00:04:15.863 } 00:04:15.863 }, 00:04:15.863 { 00:04:15.863 "method": "nvmf_create_transport", 00:04:15.863 "params": { 00:04:15.863 "trtype": "TCP", 00:04:15.863 "max_queue_depth": 128, 00:04:15.863 "max_io_qpairs_per_ctrlr": 127, 00:04:15.863 "in_capsule_data_size": 4096, 00:04:15.863 "max_io_size": 131072, 00:04:15.863 "io_unit_size": 131072, 00:04:15.863 "max_aq_depth": 128, 00:04:15.863 "num_shared_buffers": 511, 00:04:15.863 "buf_cache_size": 4294967295, 00:04:15.863 "dif_insert_or_strip": false, 00:04:15.863 "zcopy": false, 00:04:15.863 "c2h_success": true, 00:04:15.863 "sock_priority": 0, 00:04:15.863 "abort_timeout_sec": 1, 00:04:15.863 "ack_timeout": 0, 00:04:15.863 "data_wr_pool_size": 0 00:04:15.863 } 00:04:15.863 } 00:04:15.863 ] 00:04:15.863 }, 00:04:15.863 { 00:04:15.863 "subsystem": "iscsi", 00:04:15.863 "config": [ 00:04:15.863 { 00:04:15.863 "method": "iscsi_set_options", 00:04:15.863 "params": { 00:04:15.863 "node_base": "iqn.2016-06.io.spdk", 00:04:15.863 "max_sessions": 128, 00:04:15.863 "max_connections_per_session": 2, 00:04:15.863 "max_queue_depth": 64, 00:04:15.863 "default_time2wait": 2, 00:04:15.863 "default_time2retain": 20, 00:04:15.863 "first_burst_length": 8192, 00:04:15.863 "immediate_data": true, 00:04:15.863 "allow_duplicated_isid": false, 00:04:15.863 "error_recovery_level": 0, 00:04:15.863 "nop_timeout": 60, 00:04:15.863 "nop_in_interval": 30, 00:04:15.863 "disable_chap": false, 00:04:15.863 "require_chap": false, 00:04:15.863 "mutual_chap": false, 00:04:15.863 "chap_group": 0, 00:04:15.863 "max_large_datain_per_connection": 64, 00:04:15.863 "max_r2t_per_connection": 4, 00:04:15.863 "pdu_pool_size": 36864, 00:04:15.863 "immediate_data_pool_size": 16384, 00:04:15.863 "data_out_pool_size": 2048 00:04:15.863 } 00:04:15.863 } 00:04:15.863 ] 00:04:15.863 } 
00:04:15.863 ] 00:04:15.863 } 00:04:15.863 16:55:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:15.863 16:55:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2882407 00:04:15.863 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2882407 ']' 00:04:15.863 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2882407 00:04:15.863 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:15.863 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:15.863 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2882407 00:04:15.863 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:15.863 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:15.863 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2882407' 00:04:15.863 killing process with pid 2882407 00:04:15.863 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2882407 00:04:15.863 16:55:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2882407 00:04:16.438 16:55:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2882645 00:04:16.438 16:55:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:16.438 16:55:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:21.708 16:55:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2882645 00:04:21.708 16:55:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2882645 ']' 00:04:21.708 16:55:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2882645 00:04:21.708 16:55:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:21.708 16:55:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:21.708 16:55:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2882645 00:04:21.708 16:55:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:21.708 16:55:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:21.708 16:55:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2882645' 00:04:21.708 killing process with pid 2882645 00:04:21.708 16:55:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2882645 00:04:21.708 16:55:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2882645 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:21.708 00:04:21.708 real 0m6.790s 00:04:21.708 user 0m6.656s 00:04:21.708 sys 0m0.553s 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 
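The large JSON blob above is the output of save_config: every configured subsystem, including the TCP transport created by the earlier nvmf_create_transport call, serialized in load order. The test then kills the target and restarts it with --no-rpc-server --json so that, after the sleep, it can grep the new log for 'TCP Transport Init' and prove the transport was rebuilt from the file alone, with no RPC traffic. A condensed sketch of that save/restore cycle (config.json and log.txt stand in for the test's working files under test/rpc; $spdk_pid is the target started earlier):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp              # state we want captured
    $rpc save_config > config.json                 # snapshot the running configuration
    kill "$spdk_pid"; wait "$spdk_pid" 2>/dev/null
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' log.txt && echo "transport restored from JSON"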
00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.708 ************************************ 00:04:21.708 END TEST skip_rpc_with_json 00:04:21.708 ************************************ 00:04:21.708 16:55:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:21.708 16:55:09 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:21.708 16:55:09 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:21.708 16:55:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.708 ************************************ 00:04:21.708 START TEST skip_rpc_with_delay 00:04:21.708 ************************************ 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.708 [2024-05-15 16:55:09.331044] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:21.708 [2024-05-15 16:55:09.331101] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:21.708 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:21.709 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:21.709 00:04:21.709 real 0m0.066s 00:04:21.709 user 0m0.044s 00:04:21.709 sys 0m0.022s 00:04:21.709 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:21.709 16:55:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:21.709 ************************************ 00:04:21.709 END TEST skip_rpc_with_delay 00:04:21.709 ************************************ 00:04:21.967 16:55:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:21.967 16:55:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:21.967 16:55:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:21.967 16:55:09 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:21.967 16:55:09 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:21.967 16:55:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.967 ************************************ 00:04:21.967 START TEST exit_on_failed_rpc_init 00:04:21.967 ************************************ 00:04:21.967 16:55:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:04:21.967 16:55:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2883623 00:04:21.967 16:55:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2883623 00:04:21.967 16:55:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.967 16:55:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 2883623 ']' 00:04:21.967 16:55:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.967 16:55:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:21.967 16:55:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.967 16:55:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:21.967 16:55:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.967 [2024-05-15 16:55:09.465919] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
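skip_rpc_with_delay, just completed above, checks a flag conflict rather than an RPC: --wait-for-rpc defers subsystem initialization until the app is told to proceed over RPC, which is impossible when --no-rpc-server is also given, so spdk_tgt must refuse to start and exit non-zero, exactly as the app.c error shows. A sketch of the same negative check:

    # Expected to fail fast: --wait-for-rpc needs an RPC server, --no-rpc-server forbids one.
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: target started" >&2
    else
        echo "startup refused as expected"
    fi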
00:04:21.967 [2024-05-15 16:55:09.465960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883623 ] 00:04:21.967 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.967 [2024-05-15 16:55:09.519235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.967 [2024-05-15 16:55:09.598851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:22.904 [2024-05-15 16:55:10.305052] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:04:22.904 [2024-05-15 16:55:10.305097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883847 ] 00:04:22.904 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.904 [2024-05-15 16:55:10.357503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.904 [2024-05-15 16:55:10.430429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:22.904 [2024-05-15 16:55:10.430494] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:22.904 [2024-05-15 16:55:10.430503] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:22.904 [2024-05-15 16:55:10.430509] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2883623 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 2883623 ']' 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 2883623 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:22.904 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2883623 00:04:23.163 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:23.163 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:23.163 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2883623' 00:04:23.163 killing process with pid 2883623 00:04:23.163 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 2883623 00:04:23.163 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 2883623 00:04:23.421 00:04:23.422 real 0m1.489s 00:04:23.422 user 0m1.732s 00:04:23.422 sys 0m0.381s 00:04:23.422 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.422 16:55:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:23.422 ************************************ 00:04:23.422 END TEST exit_on_failed_rpc_init 00:04:23.422 ************************************ 00:04:23.422 16:55:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:23.422 00:04:23.422 real 0m14.105s 00:04:23.422 user 0m13.757s 00:04:23.422 sys 0m1.435s 00:04:23.422 16:55:10 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.422 16:55:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.422 ************************************ 00:04:23.422 END TEST skip_rpc 00:04:23.422 ************************************ 00:04:23.422 16:55:10 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:23.422 16:55:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.422 16:55:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.422 16:55:10 -- 
common/autotest_common.sh@10 -- # set +x 00:04:23.422 ************************************ 00:04:23.422 START TEST rpc_client 00:04:23.422 ************************************ 00:04:23.422 16:55:10 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:23.422 * Looking for test storage... 00:04:23.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:23.680 16:55:11 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:23.680 OK 00:04:23.680 16:55:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:23.680 00:04:23.680 real 0m0.110s 00:04:23.680 user 0m0.044s 00:04:23.680 sys 0m0.074s 00:04:23.680 16:55:11 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:23.680 16:55:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:23.680 ************************************ 00:04:23.680 END TEST rpc_client 00:04:23.680 ************************************ 00:04:23.680 16:55:11 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:23.680 16:55:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:23.680 16:55:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.680 16:55:11 -- common/autotest_common.sh@10 -- # set +x 00:04:23.680 ************************************ 00:04:23.680 START TEST json_config 00:04:23.680 ************************************ 00:04:23.680 16:55:11 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:23.680 16:55:11 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:23.680 16:55:11 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:23.680 16:55:11 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:23.680 16:55:11 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:23.680 16:55:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.680 16:55:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.680 16:55:11 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.680 16:55:11 json_config -- paths/export.sh@5 -- # export PATH 00:04:23.680 16:55:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@47 -- # : 0 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:23.680 16:55:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:23.681 16:55:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:23.681 16:55:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:23.681 16:55:11 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:23.681 16:55:11 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:23.681 16:55:11 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:23.681 INFO: JSON configuration test init 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:23.681 16:55:11 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:23.681 16:55:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:23.681 16:55:11 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:23.681 16:55:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.681 16:55:11 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:23.681 16:55:11 json_config -- json_config/common.sh@9 -- # local app=target 00:04:23.681 16:55:11 json_config -- json_config/common.sh@10 -- # shift 00:04:23.681 16:55:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:23.681 16:55:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:23.681 16:55:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:23.681 16:55:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.681 16:55:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.681 16:55:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2883982 00:04:23.681 16:55:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:23.681 Waiting for target to run... 
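For reference, the launch-and-configure sequence that the json_config test drives over the following entries can be reproduced by hand with the same binaries and RPC calls that appear in the log. This is a minimal sketch, not the test script itself; $SPDK_DIR stands in for the workspace path shown above, and the socket path matches the one the test uses:

  SOCK=/var/tmp/spdk_tgt.sock
  RPC="$SPDK_DIR/scripts/rpc.py -s $SOCK"
  # Start the target paused at the RPC layer, as the test does with --wait-for-rpc.
  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" --wait-for-rpc &
  # (the test waits for the socket with waitforlisten before issuing RPCs)
  # Load the generated NVMe bdev/subsystem config, then build the NVMe-oF TCP target
  # with the same RPCs that appear further down in this log.
  "$SPDK_DIR/scripts/gen_nvme.sh" --json-with-subsystems | $RPC load_config
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420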
00:04:23.681 16:55:11 json_config -- json_config/common.sh@25 -- # waitforlisten 2883982 /var/tmp/spdk_tgt.sock 00:04:23.681 16:55:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:23.681 16:55:11 json_config -- common/autotest_common.sh@827 -- # '[' -z 2883982 ']' 00:04:23.681 16:55:11 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:23.681 16:55:11 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:23.681 16:55:11 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:23.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:23.681 16:55:11 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:23.681 16:55:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.681 [2024-05-15 16:55:11.337371] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:04:23.681 [2024-05-15 16:55:11.337422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883982 ] 00:04:23.940 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.197 [2024-05-15 16:55:11.770381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.456 [2024-05-15 16:55:11.858529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.714 16:55:12 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:24.714 16:55:12 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:24.714 16:55:12 json_config -- json_config/common.sh@26 -- # echo '' 00:04:24.714 00:04:24.714 16:55:12 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:24.714 16:55:12 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:24.714 16:55:12 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:24.714 16:55:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.714 16:55:12 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:24.714 16:55:12 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:24.714 16:55:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.714 16:55:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.714 16:55:12 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:24.714 16:55:12 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:24.714 16:55:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:27.993 16:55:15 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:27.993 16:55:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.993 16:55:15 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:27.993 16:55:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:27.993 16:55:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.993 16:55:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:27.993 16:55:15 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:27.993 16:55:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:27.993 16:55:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:27.993 MallocForNvmf0 00:04:27.993 16:55:15 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:27.993 16:55:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:28.251 MallocForNvmf1 00:04:28.251 16:55:15 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:28.251 16:55:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:28.509 [2024-05-15 16:55:15.935224] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:28.509 16:55:15 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:28.509 16:55:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:28.509 16:55:16 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:28.509 16:55:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:28.767 16:55:16 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:28.767 16:55:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:29.025 16:55:16 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:29.025 16:55:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:29.025 [2024-05-15 16:55:16.605005] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:29.025 [2024-05-15 16:55:16.605371] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:29.025 16:55:16 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:29.025 16:55:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.025 16:55:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.025 16:55:16 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:29.025 16:55:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.025 16:55:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.025 16:55:16 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:29.025 16:55:16 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:29.025 16:55:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:29.283 MallocBdevForConfigChangeCheck 00:04:29.283 16:55:16 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:29.283 16:55:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.283 16:55:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.283 16:55:16 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:29.283 16:55:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.541 16:55:17 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:04:29.541 INFO: shutting down applications... 00:04:29.799 16:55:17 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:29.799 16:55:17 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:29.799 16:55:17 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:29.799 16:55:17 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:31.171 Calling clear_iscsi_subsystem 00:04:31.171 Calling clear_nvmf_subsystem 00:04:31.171 Calling clear_nbd_subsystem 00:04:31.171 Calling clear_ublk_subsystem 00:04:31.171 Calling clear_vhost_blk_subsystem 00:04:31.171 Calling clear_vhost_scsi_subsystem 00:04:31.171 Calling clear_bdev_subsystem 00:04:31.171 16:55:18 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:31.171 16:55:18 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:31.171 16:55:18 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:31.171 16:55:18 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.171 16:55:18 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:31.171 16:55:18 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:31.429 16:55:19 json_config -- json_config/json_config.sh@345 -- # break 00:04:31.429 16:55:19 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:31.429 16:55:19 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:31.429 16:55:19 json_config -- json_config/common.sh@31 -- # local app=target 00:04:31.429 16:55:19 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:31.429 16:55:19 json_config -- json_config/common.sh@35 -- # [[ -n 2883982 ]] 00:04:31.429 16:55:19 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2883982 00:04:31.429 [2024-05-15 16:55:19.072040] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:31.429 16:55:19 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:31.429 16:55:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.429 16:55:19 json_config -- json_config/common.sh@41 -- # kill -0 2883982 00:04:31.429 16:55:19 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:31.995 16:55:19 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:31.995 16:55:19 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.995 16:55:19 json_config -- json_config/common.sh@41 -- # kill -0 2883982 00:04:31.995 16:55:19 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:31.995 16:55:19 json_config -- json_config/common.sh@43 -- # break 00:04:31.995 16:55:19 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:31.995 16:55:19 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:31.995 SPDK target shutdown done 00:04:31.995 16:55:19 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:31.995 INFO: relaunching applications... 00:04:31.995 16:55:19 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.995 16:55:19 json_config -- json_config/common.sh@9 -- # local app=target 00:04:31.995 16:55:19 json_config -- json_config/common.sh@10 -- # shift 00:04:31.995 16:55:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:31.995 16:55:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:31.995 16:55:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:31.995 16:55:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.995 16:55:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.995 16:55:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2885601 00:04:31.995 16:55:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.995 16:55:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:31.995 Waiting for target to run... 00:04:31.995 16:55:19 json_config -- json_config/common.sh@25 -- # waitforlisten 2885601 /var/tmp/spdk_tgt.sock 00:04:31.995 16:55:19 json_config -- common/autotest_common.sh@827 -- # '[' -z 2885601 ']' 00:04:31.995 16:55:19 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:31.995 16:55:19 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:31.995 16:55:19 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:31.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:31.995 16:55:19 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:31.995 16:55:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.995 [2024-05-15 16:55:19.628479] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
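The relaunch happening here boots a fresh target directly from the JSON that save_config dumped earlier; stripped of the test plumbing, the pattern is a short sketch (paths again abbreviated to $SPDK_DIR):

  # Persist the live configuration, then restart the target from that file.
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > "$SPDK_DIR/spdk_tgt_config.json"
  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$SPDK_DIR/spdk_tgt_config.json"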
00:04:31.995 [2024-05-15 16:55:19.628531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2885601 ] 00:04:31.995 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.560 [2024-05-15 16:55:20.060265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.561 [2024-05-15 16:55:20.146855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.843 [2024-05-15 16:55:23.149965] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.843 [2024-05-15 16:55:23.181976] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:35.843 [2024-05-15 16:55:23.182298] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:36.408 16:55:23 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:36.408 16:55:23 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:36.408 16:55:23 json_config -- json_config/common.sh@26 -- # echo '' 00:04:36.408 00:04:36.408 16:55:23 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:36.408 16:55:23 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:36.408 INFO: Checking if target configuration is the same... 00:04:36.408 16:55:23 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.408 16:55:23 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:36.408 16:55:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.408 + '[' 2 -ne 2 ']' 00:04:36.408 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:36.408 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:36.408 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:36.408 +++ basename /dev/fd/62 00:04:36.408 ++ mktemp /tmp/62.XXX 00:04:36.408 + tmp_file_1=/tmp/62.QrW 00:04:36.408 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.408 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:36.408 + tmp_file_2=/tmp/spdk_tgt_config.json.FVj 00:04:36.408 + ret=0 00:04:36.408 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.666 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.666 + diff -u /tmp/62.QrW /tmp/spdk_tgt_config.json.FVj 00:04:36.666 + echo 'INFO: JSON config files are the same' 00:04:36.666 INFO: JSON config files are the same 00:04:36.666 + rm /tmp/62.QrW /tmp/spdk_tgt_config.json.FVj 00:04:36.666 + exit 0 00:04:36.666 16:55:24 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:36.666 16:55:24 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:36.666 INFO: changing configuration and checking if this can be detected... 
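The change-detection pass that starts here deletes the sentinel bdev and compares the live configuration against the saved file once more; a by-hand equivalent, assuming config_filter.py filters JSON from stdin to stdout in the way the json_diff.sh helper invokes it:

  SOCK=/var/tmp/spdk_tgt.sock
  RPC="$SPDK_DIR/scripts/rpc.py -s $SOCK"
  FILTER="$SPDK_DIR/test/json_config/config_filter.py"
  # Drop the check bdev from the running target so it no longer matches the saved file.
  $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
  # Normalize both configs and compare; a non-empty diff is the expected
  # "configuration change detected" outcome, while an empty diff would mean
  # the change went unnoticed and the test should fail.
  $RPC save_config | $FILTER -method sort > /tmp/live.json
  $FILTER -method sort < "$SPDK_DIR/spdk_tgt_config.json" > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json || echo 'INFO: configuration change detected.'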
00:04:36.666 16:55:24 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:36.666 16:55:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:36.666 16:55:24 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:36.666 16:55:24 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.666 16:55:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.927 + '[' 2 -ne 2 ']' 00:04:36.927 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:36.927 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:36.927 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:36.927 +++ basename /dev/fd/62 00:04:36.927 ++ mktemp /tmp/62.XXX 00:04:36.927 + tmp_file_1=/tmp/62.Wu3 00:04:36.927 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.927 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:36.927 + tmp_file_2=/tmp/spdk_tgt_config.json.nfo 00:04:36.927 + ret=0 00:04:36.927 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:37.248 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:37.249 + diff -u /tmp/62.Wu3 /tmp/spdk_tgt_config.json.nfo 00:04:37.249 + ret=1 00:04:37.249 + echo '=== Start of file: /tmp/62.Wu3 ===' 00:04:37.249 + cat /tmp/62.Wu3 00:04:37.249 + echo '=== End of file: /tmp/62.Wu3 ===' 00:04:37.249 + echo '' 00:04:37.249 + echo '=== Start of file: /tmp/spdk_tgt_config.json.nfo ===' 00:04:37.249 + cat /tmp/spdk_tgt_config.json.nfo 00:04:37.249 + echo '=== End of file: /tmp/spdk_tgt_config.json.nfo ===' 00:04:37.249 + echo '' 00:04:37.249 + rm /tmp/62.Wu3 /tmp/spdk_tgt_config.json.nfo 00:04:37.249 + exit 1 00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:37.249 INFO: configuration change detected. 
00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@317 -- # [[ -n 2885601 ]] 00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.249 16:55:24 json_config -- json_config/json_config.sh@323 -- # killprocess 2885601 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@946 -- # '[' -z 2885601 ']' 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@950 -- # kill -0 2885601 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@951 -- # uname 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2885601 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2885601' 00:04:37.249 killing process with pid 2885601 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@965 -- # kill 2885601 00:04:37.249 [2024-05-15 16:55:24.786023] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:37.249 16:55:24 json_config -- common/autotest_common.sh@970 -- # wait 2885601 00:04:39.162 16:55:26 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.162 16:55:26 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:39.162 16:55:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.162 16:55:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.162 16:55:26 
json_config -- json_config/json_config.sh@328 -- # return 0 00:04:39.162 16:55:26 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:39.162 INFO: Success 00:04:39.162 00:04:39.162 real 0m15.170s 00:04:39.162 user 0m15.788s 00:04:39.162 sys 0m2.047s 00:04:39.162 16:55:26 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:39.162 16:55:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.162 ************************************ 00:04:39.162 END TEST json_config 00:04:39.162 ************************************ 00:04:39.162 16:55:26 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:39.162 16:55:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:39.162 16:55:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:39.162 16:55:26 -- common/autotest_common.sh@10 -- # set +x 00:04:39.162 ************************************ 00:04:39.162 START TEST json_config_extra_key 00:04:39.162 ************************************ 00:04:39.162 16:55:26 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:39.162 16:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:39.162 16:55:26 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.162 16:55:26 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.162 
16:55:26 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.162 16:55:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.162 16:55:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.162 16:55:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.162 16:55:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:39.162 16:55:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:39.162 16:55:26 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:39.162 16:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:39.162 16:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:39.162 16:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:39.162 16:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:39.162 16:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:39.162 16:55:26 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:39.162 16:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:39.162 16:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:39.162 16:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:39.162 16:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:39.162 16:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:39.162 INFO: launching applications... 00:04:39.162 16:55:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:39.162 16:55:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:39.162 16:55:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:39.162 16:55:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:39.162 16:55:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:39.162 16:55:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:39.162 16:55:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.162 16:55:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.162 16:55:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2886961 00:04:39.162 16:55:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:39.162 Waiting for target to run... 00:04:39.162 16:55:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2886961 /var/tmp/spdk_tgt.sock 00:04:39.163 16:55:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:39.163 16:55:26 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 2886961 ']' 00:04:39.163 16:55:26 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.163 16:55:26 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:39.163 16:55:26 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:39.163 16:55:26 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:39.163 16:55:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:39.163 [2024-05-15 16:55:26.557490] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:04:39.163 [2024-05-15 16:55:26.557541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886961 ] 00:04:39.163 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.163 [2024-05-15 16:55:26.815681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.419 [2024-05-15 16:55:26.884257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.984 16:55:27 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:39.984 16:55:27 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:04:39.984 16:55:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:39.984 00:04:39.984 16:55:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:39.984 INFO: shutting down applications... 00:04:39.984 16:55:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:39.984 16:55:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:39.984 16:55:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:39.984 16:55:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2886961 ]] 00:04:39.984 16:55:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2886961 00:04:39.984 16:55:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:39.984 16:55:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.984 16:55:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2886961 00:04:39.984 16:55:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.242 16:55:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.242 16:55:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.242 16:55:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2886961 00:04:40.242 16:55:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:40.242 16:55:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:40.242 16:55:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:40.242 16:55:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:40.242 SPDK target shutdown done 00:04:40.242 16:55:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:40.242 Success 00:04:40.242 00:04:40.242 real 0m1.450s 00:04:40.242 user 0m1.293s 00:04:40.242 sys 0m0.353s 00:04:40.242 16:55:27 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.242 16:55:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:40.242 ************************************ 00:04:40.242 END TEST json_config_extra_key 00:04:40.242 ************************************ 00:04:40.500 16:55:27 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.500 16:55:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.500 16:55:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.500 16:55:27 -- common/autotest_common.sh@10 -- # set +x 00:04:40.500 ************************************ 
00:04:40.500 START TEST alias_rpc 00:04:40.500 ************************************ 00:04:40.500 16:55:27 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.500 * Looking for test storage... 00:04:40.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:40.500 16:55:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:40.500 16:55:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2887251 00:04:40.500 16:55:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.500 16:55:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2887251 00:04:40.500 16:55:28 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 2887251 ']' 00:04:40.500 16:55:28 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.500 16:55:28 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:40.500 16:55:28 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.500 16:55:28 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:40.500 16:55:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.500 [2024-05-15 16:55:28.065143] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:04:40.500 [2024-05-15 16:55:28.065200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887251 ] 00:04:40.500 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.500 [2024-05-15 16:55:28.117105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.759 [2024-05-15 16:55:28.192294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.325 16:55:28 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:41.325 16:55:28 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:41.325 16:55:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:41.583 16:55:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2887251 00:04:41.583 16:55:29 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 2887251 ']' 00:04:41.583 16:55:29 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 2887251 00:04:41.583 16:55:29 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:04:41.583 16:55:29 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:41.583 16:55:29 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2887251 00:04:41.583 16:55:29 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:41.583 16:55:29 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:41.583 16:55:29 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2887251' 00:04:41.583 killing process with pid 2887251 00:04:41.583 16:55:29 alias_rpc -- common/autotest_common.sh@965 -- # kill 2887251 00:04:41.583 16:55:29 alias_rpc -- common/autotest_common.sh@970 -- # wait 2887251 
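Note: the alias_rpc run above issues a single RPC, load_config -i, which in this SPDK version asks the target to apply a configuration while accepting the deprecated RPC method aliases, then kills the target. A hedged sketch of a full round trip with the same script, assuming a target already listening on the default /var/tmp/spdk.sock; the save_config step and the temporary file are assumptions, only load_config -i appears in this run:

  # dump the current configuration, then feed it back while allowing legacy aliases
  ./scripts/rpc.py save_config > /tmp/config.json
  ./scripts/rpc.py load_config -i < /tmp/config.json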
00:04:41.841 00:04:41.841 real 0m1.504s 00:04:41.841 user 0m1.666s 00:04:41.841 sys 0m0.379s 00:04:41.841 16:55:29 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.841 16:55:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.841 ************************************ 00:04:41.841 END TEST alias_rpc 00:04:41.841 ************************************ 00:04:41.841 16:55:29 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:04:41.841 16:55:29 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:41.841 16:55:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:41.841 16:55:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:41.841 16:55:29 -- common/autotest_common.sh@10 -- # set +x 00:04:42.098 ************************************ 00:04:42.098 START TEST spdkcli_tcp 00:04:42.098 ************************************ 00:04:42.098 16:55:29 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:42.098 * Looking for test storage... 00:04:42.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:42.098 16:55:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:42.098 16:55:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:42.098 16:55:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:42.098 16:55:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:42.098 16:55:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:42.098 16:55:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:42.098 16:55:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:42.098 16:55:29 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:42.098 16:55:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.098 16:55:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2887539 00:04:42.098 16:55:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2887539 00:04:42.098 16:55:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:42.098 16:55:29 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 2887539 ']' 00:04:42.098 16:55:29 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.098 16:55:29 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:42.098 16:55:29 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.099 16:55:29 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:42.099 16:55:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.099 [2024-05-15 16:55:29.641440] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
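Note: the spdkcli_tcp target that has just started is reached over TCP rather than over its UNIX socket; the entries below show socat listening on 127.0.0.1:9998 and forwarding to /var/tmp/spdk.sock, with rpc.py pointed at the TCP endpoint. A minimal sketch of that bridge, assuming the same port, socket path and retry/timeout values as this log:

  # bridge TCP port 9998 to the target's UNIX-domain RPC socket
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  # issue an RPC through the bridge (100 retries, 2 s timeout, as used by the test)
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods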
00:04:42.099 [2024-05-15 16:55:29.641482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887539 ] 00:04:42.099 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.099 [2024-05-15 16:55:29.695068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.357 [2024-05-15 16:55:29.770018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.357 [2024-05-15 16:55:29.770020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.921 16:55:30 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:42.921 16:55:30 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:04:42.921 16:55:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2887651 00:04:42.921 16:55:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:42.921 16:55:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:43.180 [ 00:04:43.180 "bdev_malloc_delete", 00:04:43.180 "bdev_malloc_create", 00:04:43.180 "bdev_null_resize", 00:04:43.180 "bdev_null_delete", 00:04:43.180 "bdev_null_create", 00:04:43.180 "bdev_nvme_cuse_unregister", 00:04:43.180 "bdev_nvme_cuse_register", 00:04:43.180 "bdev_opal_new_user", 00:04:43.180 "bdev_opal_set_lock_state", 00:04:43.180 "bdev_opal_delete", 00:04:43.180 "bdev_opal_get_info", 00:04:43.180 "bdev_opal_create", 00:04:43.180 "bdev_nvme_opal_revert", 00:04:43.180 "bdev_nvme_opal_init", 00:04:43.180 "bdev_nvme_send_cmd", 00:04:43.180 "bdev_nvme_get_path_iostat", 00:04:43.180 "bdev_nvme_get_mdns_discovery_info", 00:04:43.180 "bdev_nvme_stop_mdns_discovery", 00:04:43.180 "bdev_nvme_start_mdns_discovery", 00:04:43.180 "bdev_nvme_set_multipath_policy", 00:04:43.180 "bdev_nvme_set_preferred_path", 00:04:43.180 "bdev_nvme_get_io_paths", 00:04:43.180 "bdev_nvme_remove_error_injection", 00:04:43.180 "bdev_nvme_add_error_injection", 00:04:43.180 "bdev_nvme_get_discovery_info", 00:04:43.180 "bdev_nvme_stop_discovery", 00:04:43.180 "bdev_nvme_start_discovery", 00:04:43.180 "bdev_nvme_get_controller_health_info", 00:04:43.180 "bdev_nvme_disable_controller", 00:04:43.180 "bdev_nvme_enable_controller", 00:04:43.180 "bdev_nvme_reset_controller", 00:04:43.180 "bdev_nvme_get_transport_statistics", 00:04:43.180 "bdev_nvme_apply_firmware", 00:04:43.180 "bdev_nvme_detach_controller", 00:04:43.180 "bdev_nvme_get_controllers", 00:04:43.180 "bdev_nvme_attach_controller", 00:04:43.180 "bdev_nvme_set_hotplug", 00:04:43.180 "bdev_nvme_set_options", 00:04:43.180 "bdev_passthru_delete", 00:04:43.180 "bdev_passthru_create", 00:04:43.180 "bdev_lvol_check_shallow_copy", 00:04:43.180 "bdev_lvol_start_shallow_copy", 00:04:43.180 "bdev_lvol_grow_lvstore", 00:04:43.180 "bdev_lvol_get_lvols", 00:04:43.180 "bdev_lvol_get_lvstores", 00:04:43.180 "bdev_lvol_delete", 00:04:43.180 "bdev_lvol_set_read_only", 00:04:43.180 "bdev_lvol_resize", 00:04:43.180 "bdev_lvol_decouple_parent", 00:04:43.180 "bdev_lvol_inflate", 00:04:43.180 "bdev_lvol_rename", 00:04:43.180 "bdev_lvol_clone_bdev", 00:04:43.180 "bdev_lvol_clone", 00:04:43.180 "bdev_lvol_snapshot", 00:04:43.180 "bdev_lvol_create", 00:04:43.180 "bdev_lvol_delete_lvstore", 00:04:43.180 "bdev_lvol_rename_lvstore", 00:04:43.180 "bdev_lvol_create_lvstore", 00:04:43.180 "bdev_raid_set_options", 
00:04:43.180 "bdev_raid_remove_base_bdev", 00:04:43.180 "bdev_raid_add_base_bdev", 00:04:43.180 "bdev_raid_delete", 00:04:43.180 "bdev_raid_create", 00:04:43.180 "bdev_raid_get_bdevs", 00:04:43.180 "bdev_error_inject_error", 00:04:43.180 "bdev_error_delete", 00:04:43.180 "bdev_error_create", 00:04:43.180 "bdev_split_delete", 00:04:43.180 "bdev_split_create", 00:04:43.180 "bdev_delay_delete", 00:04:43.180 "bdev_delay_create", 00:04:43.180 "bdev_delay_update_latency", 00:04:43.180 "bdev_zone_block_delete", 00:04:43.180 "bdev_zone_block_create", 00:04:43.180 "blobfs_create", 00:04:43.180 "blobfs_detect", 00:04:43.180 "blobfs_set_cache_size", 00:04:43.180 "bdev_aio_delete", 00:04:43.180 "bdev_aio_rescan", 00:04:43.180 "bdev_aio_create", 00:04:43.180 "bdev_ftl_set_property", 00:04:43.180 "bdev_ftl_get_properties", 00:04:43.180 "bdev_ftl_get_stats", 00:04:43.180 "bdev_ftl_unmap", 00:04:43.180 "bdev_ftl_unload", 00:04:43.180 "bdev_ftl_delete", 00:04:43.180 "bdev_ftl_load", 00:04:43.180 "bdev_ftl_create", 00:04:43.180 "bdev_virtio_attach_controller", 00:04:43.180 "bdev_virtio_scsi_get_devices", 00:04:43.180 "bdev_virtio_detach_controller", 00:04:43.180 "bdev_virtio_blk_set_hotplug", 00:04:43.180 "bdev_iscsi_delete", 00:04:43.180 "bdev_iscsi_create", 00:04:43.181 "bdev_iscsi_set_options", 00:04:43.181 "accel_error_inject_error", 00:04:43.181 "ioat_scan_accel_module", 00:04:43.181 "dsa_scan_accel_module", 00:04:43.181 "iaa_scan_accel_module", 00:04:43.181 "vfu_virtio_create_scsi_endpoint", 00:04:43.181 "vfu_virtio_scsi_remove_target", 00:04:43.181 "vfu_virtio_scsi_add_target", 00:04:43.181 "vfu_virtio_create_blk_endpoint", 00:04:43.181 "vfu_virtio_delete_endpoint", 00:04:43.181 "keyring_file_remove_key", 00:04:43.181 "keyring_file_add_key", 00:04:43.181 "iscsi_get_histogram", 00:04:43.181 "iscsi_enable_histogram", 00:04:43.181 "iscsi_set_options", 00:04:43.181 "iscsi_get_auth_groups", 00:04:43.181 "iscsi_auth_group_remove_secret", 00:04:43.181 "iscsi_auth_group_add_secret", 00:04:43.181 "iscsi_delete_auth_group", 00:04:43.181 "iscsi_create_auth_group", 00:04:43.181 "iscsi_set_discovery_auth", 00:04:43.181 "iscsi_get_options", 00:04:43.181 "iscsi_target_node_request_logout", 00:04:43.181 "iscsi_target_node_set_redirect", 00:04:43.181 "iscsi_target_node_set_auth", 00:04:43.181 "iscsi_target_node_add_lun", 00:04:43.181 "iscsi_get_stats", 00:04:43.181 "iscsi_get_connections", 00:04:43.181 "iscsi_portal_group_set_auth", 00:04:43.181 "iscsi_start_portal_group", 00:04:43.181 "iscsi_delete_portal_group", 00:04:43.181 "iscsi_create_portal_group", 00:04:43.181 "iscsi_get_portal_groups", 00:04:43.181 "iscsi_delete_target_node", 00:04:43.181 "iscsi_target_node_remove_pg_ig_maps", 00:04:43.181 "iscsi_target_node_add_pg_ig_maps", 00:04:43.181 "iscsi_create_target_node", 00:04:43.181 "iscsi_get_target_nodes", 00:04:43.181 "iscsi_delete_initiator_group", 00:04:43.181 "iscsi_initiator_group_remove_initiators", 00:04:43.181 "iscsi_initiator_group_add_initiators", 00:04:43.181 "iscsi_create_initiator_group", 00:04:43.181 "iscsi_get_initiator_groups", 00:04:43.181 "nvmf_set_crdt", 00:04:43.181 "nvmf_set_config", 00:04:43.181 "nvmf_set_max_subsystems", 00:04:43.181 "nvmf_stop_mdns_prr", 00:04:43.181 "nvmf_publish_mdns_prr", 00:04:43.181 "nvmf_subsystem_get_listeners", 00:04:43.181 "nvmf_subsystem_get_qpairs", 00:04:43.181 "nvmf_subsystem_get_controllers", 00:04:43.181 "nvmf_get_stats", 00:04:43.181 "nvmf_get_transports", 00:04:43.181 "nvmf_create_transport", 00:04:43.181 "nvmf_get_targets", 00:04:43.181 
"nvmf_delete_target", 00:04:43.181 "nvmf_create_target", 00:04:43.181 "nvmf_subsystem_allow_any_host", 00:04:43.181 "nvmf_subsystem_remove_host", 00:04:43.181 "nvmf_subsystem_add_host", 00:04:43.181 "nvmf_ns_remove_host", 00:04:43.181 "nvmf_ns_add_host", 00:04:43.181 "nvmf_subsystem_remove_ns", 00:04:43.181 "nvmf_subsystem_add_ns", 00:04:43.181 "nvmf_subsystem_listener_set_ana_state", 00:04:43.181 "nvmf_discovery_get_referrals", 00:04:43.181 "nvmf_discovery_remove_referral", 00:04:43.181 "nvmf_discovery_add_referral", 00:04:43.181 "nvmf_subsystem_remove_listener", 00:04:43.181 "nvmf_subsystem_add_listener", 00:04:43.181 "nvmf_delete_subsystem", 00:04:43.181 "nvmf_create_subsystem", 00:04:43.181 "nvmf_get_subsystems", 00:04:43.181 "env_dpdk_get_mem_stats", 00:04:43.181 "nbd_get_disks", 00:04:43.181 "nbd_stop_disk", 00:04:43.181 "nbd_start_disk", 00:04:43.181 "ublk_recover_disk", 00:04:43.181 "ublk_get_disks", 00:04:43.181 "ublk_stop_disk", 00:04:43.181 "ublk_start_disk", 00:04:43.181 "ublk_destroy_target", 00:04:43.181 "ublk_create_target", 00:04:43.181 "virtio_blk_create_transport", 00:04:43.181 "virtio_blk_get_transports", 00:04:43.181 "vhost_controller_set_coalescing", 00:04:43.181 "vhost_get_controllers", 00:04:43.181 "vhost_delete_controller", 00:04:43.181 "vhost_create_blk_controller", 00:04:43.181 "vhost_scsi_controller_remove_target", 00:04:43.181 "vhost_scsi_controller_add_target", 00:04:43.181 "vhost_start_scsi_controller", 00:04:43.181 "vhost_create_scsi_controller", 00:04:43.181 "thread_set_cpumask", 00:04:43.181 "framework_get_scheduler", 00:04:43.181 "framework_set_scheduler", 00:04:43.181 "framework_get_reactors", 00:04:43.181 "thread_get_io_channels", 00:04:43.181 "thread_get_pollers", 00:04:43.181 "thread_get_stats", 00:04:43.181 "framework_monitor_context_switch", 00:04:43.181 "spdk_kill_instance", 00:04:43.181 "log_enable_timestamps", 00:04:43.181 "log_get_flags", 00:04:43.181 "log_clear_flag", 00:04:43.181 "log_set_flag", 00:04:43.181 "log_get_level", 00:04:43.181 "log_set_level", 00:04:43.181 "log_get_print_level", 00:04:43.181 "log_set_print_level", 00:04:43.181 "framework_enable_cpumask_locks", 00:04:43.181 "framework_disable_cpumask_locks", 00:04:43.181 "framework_wait_init", 00:04:43.181 "framework_start_init", 00:04:43.181 "scsi_get_devices", 00:04:43.181 "bdev_get_histogram", 00:04:43.181 "bdev_enable_histogram", 00:04:43.181 "bdev_set_qos_limit", 00:04:43.181 "bdev_set_qd_sampling_period", 00:04:43.181 "bdev_get_bdevs", 00:04:43.181 "bdev_reset_iostat", 00:04:43.181 "bdev_get_iostat", 00:04:43.181 "bdev_examine", 00:04:43.181 "bdev_wait_for_examine", 00:04:43.181 "bdev_set_options", 00:04:43.181 "notify_get_notifications", 00:04:43.181 "notify_get_types", 00:04:43.181 "accel_get_stats", 00:04:43.181 "accel_set_options", 00:04:43.181 "accel_set_driver", 00:04:43.181 "accel_crypto_key_destroy", 00:04:43.181 "accel_crypto_keys_get", 00:04:43.181 "accel_crypto_key_create", 00:04:43.181 "accel_assign_opc", 00:04:43.181 "accel_get_module_info", 00:04:43.181 "accel_get_opc_assignments", 00:04:43.181 "vmd_rescan", 00:04:43.181 "vmd_remove_device", 00:04:43.181 "vmd_enable", 00:04:43.181 "sock_get_default_impl", 00:04:43.181 "sock_set_default_impl", 00:04:43.181 "sock_impl_set_options", 00:04:43.181 "sock_impl_get_options", 00:04:43.181 "iobuf_get_stats", 00:04:43.181 "iobuf_set_options", 00:04:43.181 "keyring_get_keys", 00:04:43.181 "framework_get_pci_devices", 00:04:43.181 "framework_get_config", 00:04:43.181 "framework_get_subsystems", 00:04:43.181 
"vfu_tgt_set_base_path", 00:04:43.181 "trace_get_info", 00:04:43.181 "trace_get_tpoint_group_mask", 00:04:43.181 "trace_disable_tpoint_group", 00:04:43.181 "trace_enable_tpoint_group", 00:04:43.181 "trace_clear_tpoint_mask", 00:04:43.181 "trace_set_tpoint_mask", 00:04:43.181 "spdk_get_version", 00:04:43.181 "rpc_get_methods" 00:04:43.181 ] 00:04:43.181 16:55:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:43.181 16:55:30 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.181 16:55:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.181 16:55:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:43.181 16:55:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2887539 00:04:43.181 16:55:30 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 2887539 ']' 00:04:43.181 16:55:30 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 2887539 00:04:43.181 16:55:30 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:04:43.181 16:55:30 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:43.181 16:55:30 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2887539 00:04:43.181 16:55:30 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:43.181 16:55:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:43.181 16:55:30 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2887539' 00:04:43.181 killing process with pid 2887539 00:04:43.181 16:55:30 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 2887539 00:04:43.181 16:55:30 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 2887539 00:04:43.439 00:04:43.440 real 0m1.528s 00:04:43.440 user 0m2.846s 00:04:43.440 sys 0m0.436s 00:04:43.440 16:55:31 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.440 16:55:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.440 ************************************ 00:04:43.440 END TEST spdkcli_tcp 00:04:43.440 ************************************ 00:04:43.440 16:55:31 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:43.440 16:55:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.440 16:55:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.440 16:55:31 -- common/autotest_common.sh@10 -- # set +x 00:04:43.698 ************************************ 00:04:43.698 START TEST dpdk_mem_utility 00:04:43.698 ************************************ 00:04:43.698 16:55:31 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:43.698 * Looking for test storage... 
00:04:43.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:43.698 16:55:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:43.698 16:55:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2887840 00:04:43.698 16:55:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2887840 00:04:43.698 16:55:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.698 16:55:31 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 2887840 ']' 00:04:43.698 16:55:31 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.698 16:55:31 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:43.698 16:55:31 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.698 16:55:31 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:43.698 16:55:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.698 [2024-05-15 16:55:31.249998] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:04:43.698 [2024-05-15 16:55:31.250042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887840 ] 00:04:43.698 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.698 [2024-05-15 16:55:31.301690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.956 [2024-05-15 16:55:31.375987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.521 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:44.521 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:04:44.521 16:55:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:44.521 16:55:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:44.521 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.521 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.522 { 00:04:44.522 "filename": "/tmp/spdk_mem_dump.txt" 00:04:44.522 } 00:04:44.522 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.522 16:55:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:44.522 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:44.522 1 heaps totaling size 814.000000 MiB 00:04:44.522 size: 814.000000 MiB heap id: 0 00:04:44.522 end heaps---------- 00:04:44.522 8 mempools totaling size 598.116089 MiB 00:04:44.522 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:44.522 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:44.522 size: 84.521057 MiB name: bdev_io_2887840 00:04:44.522 size: 51.011292 MiB name: evtpool_2887840 00:04:44.522 size: 50.003479 MiB name: 
msgpool_2887840 00:04:44.522 size: 21.763794 MiB name: PDU_Pool 00:04:44.522 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:44.522 size: 0.026123 MiB name: Session_Pool 00:04:44.522 end mempools------- 00:04:44.522 6 memzones totaling size 4.142822 MiB 00:04:44.522 size: 1.000366 MiB name: RG_ring_0_2887840 00:04:44.522 size: 1.000366 MiB name: RG_ring_1_2887840 00:04:44.522 size: 1.000366 MiB name: RG_ring_4_2887840 00:04:44.522 size: 1.000366 MiB name: RG_ring_5_2887840 00:04:44.522 size: 0.125366 MiB name: RG_ring_2_2887840 00:04:44.522 size: 0.015991 MiB name: RG_ring_3_2887840 00:04:44.522 end memzones------- 00:04:44.522 16:55:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:44.522 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:44.522 list of free elements. size: 12.519348 MiB 00:04:44.522 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:44.522 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:44.522 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:44.522 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:44.522 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:44.522 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:44.522 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:44.522 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:44.522 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:44.522 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:44.522 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:44.522 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:44.522 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:44.522 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:44.522 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:44.522 list of standard malloc elements. 
size: 199.218079 MiB 00:04:44.522 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:44.522 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:44.522 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:44.522 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:44.522 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:44.522 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:44.522 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:44.522 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:44.522 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:44.522 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:44.522 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:44.522 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:44.522 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:44.522 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:44.522 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:44.522 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:44.522 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:44.522 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:44.522 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:44.522 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:44.522 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:44.522 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:44.522 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:44.522 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:44.522 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:44.522 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:44.522 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:44.522 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:44.522 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:44.522 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:44.522 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:44.522 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:44.522 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:44.522 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:44.522 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:44.522 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:44.522 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:44.522 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:44.522 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:44.522 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:44.522 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:44.522 list of memzone associated elements. 
size: 602.262573 MiB 00:04:44.522 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:44.522 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:44.522 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:44.522 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:44.522 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:44.522 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2887840_0 00:04:44.522 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:44.522 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2887840_0 00:04:44.522 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:44.522 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2887840_0 00:04:44.522 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:44.522 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:44.522 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:44.522 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:44.522 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:44.522 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2887840 00:04:44.522 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:44.522 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2887840 00:04:44.522 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:44.522 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2887840 00:04:44.522 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:44.522 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:44.522 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:44.522 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:44.522 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:44.522 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:44.522 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:44.522 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:44.522 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:44.522 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2887840 00:04:44.522 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:44.522 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2887840 00:04:44.522 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:44.522 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2887840 00:04:44.522 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:44.522 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2887840 00:04:44.522 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:44.522 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2887840 00:04:44.522 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:44.522 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:44.522 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:44.522 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:44.522 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:44.522 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:44.522 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:44.522 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2887840 00:04:44.522 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:44.522 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:44.522 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:44.522 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:44.522 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:44.522 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2887840 00:04:44.522 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:44.522 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:44.522 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:44.522 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2887840 00:04:44.522 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:44.522 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2887840 00:04:44.523 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:44.523 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:44.523 16:55:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:44.523 16:55:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2887840 00:04:44.523 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 2887840 ']' 00:04:44.523 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 2887840 00:04:44.523 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:04:44.523 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:44.523 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2887840 00:04:44.781 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:44.781 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:44.781 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2887840' 00:04:44.781 killing process with pid 2887840 00:04:44.781 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 2887840 00:04:44.781 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 2887840 00:04:45.039 00:04:45.039 real 0m1.420s 00:04:45.039 user 0m1.488s 00:04:45.039 sys 0m0.396s 00:04:45.039 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:45.039 16:55:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.039 ************************************ 00:04:45.039 END TEST dpdk_mem_utility 00:04:45.039 ************************************ 00:04:45.039 16:55:32 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:45.039 16:55:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.039 16:55:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.039 16:55:32 -- common/autotest_common.sh@10 -- # set +x 00:04:45.039 ************************************ 00:04:45.039 START TEST event 00:04:45.039 ************************************ 00:04:45.039 16:55:32 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:45.039 * Looking for test storage... 
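Note: the event framework tests that start here open with event_perf, which launches one reactor per core in the -m mask and counts how many events each lcore dispatches during the -t window; the per-lcore totals appear in the entries that follow. A sketch of an equivalent standalone run, assuming the test binary built under the SPDK tree as in this workspace:

  # four reactors (cores 0-3), one second of event dispatch
  ./test/event/event_perf/event_perf -m 0xF -t 1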
00:04:45.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:45.039 16:55:32 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:45.039 16:55:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:45.039 16:55:32 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:45.039 16:55:32 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:04:45.039 16:55:32 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.039 16:55:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.297 ************************************ 00:04:45.297 START TEST event_perf 00:04:45.297 ************************************ 00:04:45.297 16:55:32 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:45.297 Running I/O for 1 seconds...[2024-05-15 16:55:32.729289] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:04:45.297 [2024-05-15 16:55:32.729350] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888134 ] 00:04:45.297 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.297 [2024-05-15 16:55:32.788926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.297 [2024-05-15 16:55:32.864625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.297 [2024-05-15 16:55:32.864727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.297 [2024-05-15 16:55:32.864823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.297 [2024-05-15 16:55:32.864824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.670 Running I/O for 1 seconds... 00:04:46.670 lcore 0: 199925 00:04:46.670 lcore 1: 199923 00:04:46.670 lcore 2: 199922 00:04:46.670 lcore 3: 199924 00:04:46.670 done. 00:04:46.670 00:04:46.670 real 0m1.239s 00:04:46.670 user 0m4.158s 00:04:46.670 sys 0m0.074s 00:04:46.670 16:55:33 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.670 16:55:33 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:46.670 ************************************ 00:04:46.670 END TEST event_perf 00:04:46.670 ************************************ 00:04:46.670 16:55:33 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:46.670 16:55:33 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:46.670 16:55:33 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.670 16:55:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.670 ************************************ 00:04:46.670 START TEST event_reactor 00:04:46.670 ************************************ 00:04:46.670 16:55:34 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:46.670 [2024-05-15 16:55:34.045597] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
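Note: the event_reactor run launched here drives a single reactor on core 0; the test_start/oneshot/tick 100/tick 250/tick 500 lines that follow appear to be the timer-poller periods being exercised (that reading is an inference from the output, not stated in the log). An equivalent standalone invocation, assuming the same build layout:

  # single-core reactor tick test, one second
  ./test/event/reactor/reactor -t 1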
00:04:46.670 [2024-05-15 16:55:34.045674] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888387 ] 00:04:46.670 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.670 [2024-05-15 16:55:34.101549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.670 [2024-05-15 16:55:34.172428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.600 test_start 00:04:47.600 oneshot 00:04:47.600 tick 100 00:04:47.600 tick 100 00:04:47.600 tick 250 00:04:47.600 tick 100 00:04:47.600 tick 100 00:04:47.600 tick 250 00:04:47.600 tick 100 00:04:47.600 tick 500 00:04:47.600 tick 100 00:04:47.600 tick 100 00:04:47.600 tick 250 00:04:47.600 tick 100 00:04:47.600 tick 100 00:04:47.600 test_end 00:04:47.600 00:04:47.600 real 0m1.232s 00:04:47.600 user 0m1.153s 00:04:47.600 sys 0m0.075s 00:04:47.600 16:55:35 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.600 16:55:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:47.600 ************************************ 00:04:47.600 END TEST event_reactor 00:04:47.600 ************************************ 00:04:47.858 16:55:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:47.858 16:55:35 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:47.858 16:55:35 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.858 16:55:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.858 ************************************ 00:04:47.858 START TEST event_reactor_perf 00:04:47.858 ************************************ 00:04:47.858 16:55:35 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:47.858 [2024-05-15 16:55:35.349730] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
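Note: reactor_perf, started here, measures raw event throughput on one reactor and reports events per second (the next entries show roughly 492 k events/s on this node). An equivalent standalone invocation, assuming the same build layout:

  # one reactor, one second of event throughput measurement
  ./test/event/reactor_perf/reactor_perf -t 1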
00:04:47.858 [2024-05-15 16:55:35.349804] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888635 ] 00:04:47.858 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.858 [2024-05-15 16:55:35.406612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.858 [2024-05-15 16:55:35.478533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.232 test_start 00:04:49.233 test_end 00:04:49.233 Performance: 492468 events per second 00:04:49.233 00:04:49.233 real 0m1.233s 00:04:49.233 user 0m1.159s 00:04:49.233 sys 0m0.069s 00:04:49.233 16:55:36 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.233 16:55:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.233 ************************************ 00:04:49.233 END TEST event_reactor_perf 00:04:49.233 ************************************ 00:04:49.233 16:55:36 event -- event/event.sh@49 -- # uname -s 00:04:49.233 16:55:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:49.233 16:55:36 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:49.233 16:55:36 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.233 16:55:36 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.233 16:55:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.233 ************************************ 00:04:49.233 START TEST event_scheduler 00:04:49.233 ************************************ 00:04:49.233 16:55:36 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:49.233 * Looking for test storage... 00:04:49.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:49.233 16:55:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:49.233 16:55:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2888916 00:04:49.233 16:55:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.233 16:55:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:49.233 16:55:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2888916 00:04:49.233 16:55:36 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 2888916 ']' 00:04:49.233 16:55:36 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.233 16:55:36 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:49.233 16:55:36 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
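Note: the scheduler test starting here brings the app up with --wait-for-rpc, switches the framework to the dynamic scheduler, completes init, and then creates and re-pins threads through the test's scheduler_plugin RPCs (the scheduler_thread_create calls in the entries below). A rough outline of that control flow as plain RPC calls against the same socket; the RPC names and flags are taken from this log, while the PYTHONPATH hint for locating the plugin module is an assumption:

  # switch to the dynamic scheduler while the app is still waiting for RPC, then finish init
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init
  # create an always-active thread pinned to core 0 via the test plugin (PYTHONPATH is assumed)
  PYTHONPATH=./test/event/scheduler ./scripts/rpc.py --plugin scheduler_plugin \
      scheduler_thread_create -n active_pinned -m 0x1 -a 100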
00:04:49.233 16:55:36 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:49.233 16:55:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.233 [2024-05-15 16:55:36.764486] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:04:49.233 [2024-05-15 16:55:36.764532] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888916 ] 00:04:49.233 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.233 [2024-05-15 16:55:36.814670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:49.233 [2024-05-15 16:55:36.891084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.233 [2024-05-15 16:55:36.891172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.233 [2024-05-15 16:55:36.891190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.233 [2024-05-15 16:55:36.891192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.189 16:55:37 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:50.189 16:55:37 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:04:50.189 16:55:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:50.189 16:55:37 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.189 16:55:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.189 POWER: Env isn't set yet! 00:04:50.189 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:50.189 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:50.189 POWER: Cannot set governor of lcore 0 to userspace 00:04:50.189 POWER: Attempting to initialise PSTAT power management... 00:04:50.189 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:50.189 POWER: Initialized successfully for lcore 0 power management 00:04:50.189 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:50.189 POWER: Initialized successfully for lcore 1 power management 00:04:50.189 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:50.189 POWER: Initialized successfully for lcore 2 power management 00:04:50.189 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:50.189 POWER: Initialized successfully for lcore 3 power management 00:04:50.189 16:55:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.189 16:55:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:50.189 16:55:37 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.189 16:55:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.189 [2024-05-15 16:55:37.758675] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
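Note: the POWER lines above show the dynamic scheduler taking over CPU frequency governing: the ACPI cpufreq write of the userspace governor is rejected, pstate management is used instead, and each lcore's governor is moved to performance (and restored to powersave at shutdown later in this log). The same knob can be observed outside the test through standard cpufreq sysfs files; scaling_available_governors does not appear in this log and is listed here as a generic cpufreq path:

  # governor currently applied to core 0
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  # governors the driver will accept
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors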
00:04:50.189 16:55:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.189 16:55:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:50.189 16:55:37 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.189 16:55:37 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.189 16:55:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:50.189 ************************************ 00:04:50.189 START TEST scheduler_create_thread 00:04:50.189 ************************************ 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.189 2 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.189 3 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.189 4 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.189 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.447 5 00:04:50.447 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.447 16:55:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:50.447 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.447 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.447 6 00:04:50.447 16:55:37 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.447 16:55:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.448 7 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.448 8 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.448 9 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.448 10 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.448 16:55:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.820 16:55:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.820 16:55:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:51.820 16:55:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:51.820 16:55:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.820 16:55:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.753 16:55:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.753 16:55:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:52.753 16:55:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.753 16:55:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.318 16:55:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.318 16:55:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:53.318 16:55:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:53.318 16:55:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.318 16:55:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.250 16:55:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.250 00:04:54.250 real 0m3.891s 00:04:54.250 user 0m0.019s 00:04:54.250 sys 0m0.009s 00:04:54.250 16:55:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:54.250 16:55:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.250 ************************************ 00:04:54.250 END TEST scheduler_create_thread 00:04:54.250 ************************************ 00:04:54.250 16:55:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:54.250 16:55:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2888916 00:04:54.250 16:55:41 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 2888916 ']' 00:04:54.250 16:55:41 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 2888916 00:04:54.250 16:55:41 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:04:54.250 16:55:41 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:54.250 16:55:41 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2888916 00:04:54.250 16:55:41 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:54.250 16:55:41 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:54.250 16:55:41 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2888916' 00:04:54.250 killing process with pid 2888916 00:04:54.250 16:55:41 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 2888916 00:04:54.250 16:55:41 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 2888916 00:04:54.506 [2024-05-15 16:55:42.066342] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
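Note: once the scheduler app stops (the next entries restore the powersave governors), the app_repeat test takes over: it starts an app on /var/tmp/spdk-nbd.sock, creates two malloc bdevs of 64 MiB with 4096-byte blocks, and exports them as /dev/nbd0 and /dev/nbd1. A minimal sketch of that bdev-to-NBD path with the same RPC socket, assuming the nbd kernel module is loaded (the test probes it with modprobe); the explicit Malloc0 name mirrors the name the target returns in this run:

  # create a 64 MiB bdev with 4096-byte blocks; rpc.py prints the new bdev's name (Malloc0 here)
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
  # export the bdev over the kernel NBD driver
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0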
00:04:54.763 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:04:54.763 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:54.763 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:04:54.763 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:54.763 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:04:54.763 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:54.763 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:04:54.763 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:54.763 00:04:54.763 real 0m5.724s 00:04:54.763 user 0m12.601s 00:04:54.763 sys 0m0.361s 00:04:54.763 16:55:42 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:54.763 16:55:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.763 ************************************ 00:04:54.763 END TEST event_scheduler 00:04:54.763 ************************************ 00:04:54.763 16:55:42 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:54.763 16:55:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:54.763 16:55:42 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:54.763 16:55:42 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:54.763 16:55:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.020 ************************************ 00:04:55.020 START TEST app_repeat 00:04:55.020 ************************************ 00:04:55.020 16:55:42 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:04:55.020 16:55:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.020 16:55:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.020 16:55:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:55.020 16:55:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.020 16:55:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:55.020 16:55:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:55.020 16:55:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:55.020 16:55:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2889887 00:04:55.020 16:55:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.020 16:55:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2889887' 00:04:55.020 Process app_repeat pid: 2889887 00:04:55.020 16:55:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:55.020 16:55:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:55.020 spdk_app_start Round 0 00:04:55.020 16:55:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2889887 /var/tmp/spdk-nbd.sock 00:04:55.020 16:55:42 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2889887 ']' 00:04:55.020 16:55:42 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:55.020 16:55:42 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:55.020 16:55:42 
event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:55.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:55.020 16:55:42 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:55.020 16:55:42 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:55.020 16:55:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.020 [2024-05-15 16:55:42.472201] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:04:55.020 [2024-05-15 16:55:42.472249] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2889887 ] 00:04:55.020 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.020 [2024-05-15 16:55:42.526384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.020 [2024-05-15 16:55:42.605965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.020 [2024-05-15 16:55:42.605969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.948 16:55:43 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:55.948 16:55:43 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:55.948 16:55:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.948 Malloc0 00:04:55.949 16:55:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.206 Malloc1 00:04:56.206 16:55:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.206 16:55:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.206 16:55:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:56.207 /dev/nbd0 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:56.207 16:55:43 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:56.207 16:55:43 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:56.207 16:55:43 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:56.207 16:55:43 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:56.207 16:55:43 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:56.207 16:55:43 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:56.207 16:55:43 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:56.207 16:55:43 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:56.207 16:55:43 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.207 1+0 records in 00:04:56.207 1+0 records out 00:04:56.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 8.9684e-05 s, 45.7 MB/s 00:04:56.207 16:55:43 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.207 16:55:43 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:56.207 16:55:43 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.207 16:55:43 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:56.207 16:55:43 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.207 16:55:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:56.465 /dev/nbd1 00:04:56.465 16:55:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:56.465 16:55:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:56.465 16:55:44 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:56.465 16:55:44 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:56.465 16:55:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:56.465 16:55:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:56.465 16:55:44 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:56.465 16:55:44 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:56.465 16:55:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:56.465 16:55:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:56.465 16:55:44 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.465 1+0 records in 00:04:56.465 1+0 records out 00:04:56.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000194688 s, 21.0 MB/s 00:04:56.465 16:55:44 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.465 16:55:44 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:56.465 16:55:44 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:56.465 16:55:44 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:56.465 16:55:44 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:56.465 16:55:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.465 16:55:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.465 16:55:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.465 16:55:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.465 16:55:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:56.722 { 00:04:56.722 "nbd_device": "/dev/nbd0", 00:04:56.722 "bdev_name": "Malloc0" 00:04:56.722 }, 00:04:56.722 { 00:04:56.722 "nbd_device": "/dev/nbd1", 00:04:56.722 "bdev_name": "Malloc1" 00:04:56.722 } 00:04:56.722 ]' 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.722 { 00:04:56.722 "nbd_device": "/dev/nbd0", 00:04:56.722 "bdev_name": "Malloc0" 00:04:56.722 }, 00:04:56.722 { 00:04:56.722 "nbd_device": "/dev/nbd1", 00:04:56.722 "bdev_name": "Malloc1" 00:04:56.722 } 00:04:56.722 ]' 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.722 /dev/nbd1' 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.722 /dev/nbd1' 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.722 256+0 records in 00:04:56.722 256+0 records out 00:04:56.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00363556 s, 288 MB/s 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.722 256+0 records in 00:04:56.722 256+0 records out 00:04:56.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133198 s, 78.7 MB/s 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.722 256+0 records in 00:04:56.722 256+0 records out 00:04:56.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015015 s, 69.8 MB/s 00:04:56.722 16:55:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.723 16:55:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.980 16:55:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.980 16:55:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.980 16:55:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.980 16:55:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.980 16:55:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.980 16:55:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.980 16:55:44 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:04:56.980 16:55:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.980 16:55:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.980 16:55:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:57.237 16:55:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:57.237 16:55:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:57.237 16:55:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:57.237 16:55:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.237 16:55:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.237 16:55:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:57.237 16:55:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.237 16:55:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.237 16:55:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.237 16:55:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.237 16:55:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.494 16:55:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:57.494 16:55:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:57.494 16:55:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.494 16:55:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:57.494 16:55:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:57.494 16:55:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.494 16:55:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:57.494 16:55:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:57.494 16:55:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:57.494 16:55:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:57.494 16:55:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:57.494 16:55:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:57.494 16:55:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:57.807 16:55:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:57.807 [2024-05-15 16:55:45.371570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.807 [2024-05-15 16:55:45.441692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.807 [2024-05-15 16:55:45.441695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.087 [2024-05-15 16:55:45.484042] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:58.087 [2024-05-15 16:55:45.484080] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
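Each of the three app_repeat rounds logged here repeats the same malloc-bdev/NBD write-and-verify cycle. As a standalone sketch (socket path, sizes, file name and device names are taken from the trace above; the loop structure and lack of error handling are illustrative):

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create 64 4096            # -> Malloc0 (64 MB bdev, 4 KiB blocks)
$rpc bdev_malloc_create 64 4096            # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0      # expose each bdev as a kernel NBD device
$rpc nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256            # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct   # write the pattern to the device
  cmp -b -n 1M nbdrandtest "$nbd"                              # read it back and compare
done
rm nbdrandtest
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc spdk_kill_instance SIGTERM            # end the round; app_repeat then starts the next one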
00:05:00.612 16:55:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.612 16:55:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:00.612 spdk_app_start Round 1 00:05:00.612 16:55:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2889887 /var/tmp/spdk-nbd.sock 00:05:00.612 16:55:48 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2889887 ']' 00:05:00.612 16:55:48 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.612 16:55:48 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:00.612 16:55:48 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:00.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.612 16:55:48 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:00.612 16:55:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.870 16:55:48 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:00.870 16:55:48 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:00.870 16:55:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:00.870 Malloc0 00:05:00.870 16:55:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.129 Malloc1 00:05:01.129 16:55:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.129 16:55:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.387 /dev/nbd0 00:05:01.387 16:55:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.387 16:55:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:01.387 16:55:48 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:01.387 16:55:48 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:01.387 16:55:48 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:01.387 16:55:48 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:01.387 16:55:48 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:01.387 16:55:48 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:01.387 16:55:48 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:01.387 16:55:48 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:01.387 16:55:48 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.387 1+0 records in 00:05:01.387 1+0 records out 00:05:01.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191808 s, 21.4 MB/s 00:05:01.387 16:55:48 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.387 16:55:48 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:01.387 16:55:48 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.387 16:55:48 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:01.387 16:55:48 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:01.387 16:55:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.387 16:55:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.387 16:55:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:01.645 /dev/nbd1 00:05:01.645 16:55:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:01.645 16:55:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:01.645 16:55:49 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:01.645 16:55:49 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:01.645 16:55:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:01.645 16:55:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:01.645 16:55:49 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:01.645 16:55:49 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:01.645 16:55:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:01.645 16:55:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:01.645 16:55:49 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.645 1+0 records in 00:05:01.645 1+0 records out 00:05:01.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190159 s, 21.5 MB/s 00:05:01.645 16:55:49 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.645 16:55:49 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:01.645 16:55:49 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:01.645 16:55:49 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:01.645 16:55:49 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:01.645 16:55:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.645 16:55:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.645 16:55:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:01.645 16:55:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.645 16:55:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.645 16:55:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:01.645 { 00:05:01.645 "nbd_device": "/dev/nbd0", 00:05:01.645 "bdev_name": "Malloc0" 00:05:01.645 }, 00:05:01.645 { 00:05:01.645 "nbd_device": "/dev/nbd1", 00:05:01.645 "bdev_name": "Malloc1" 00:05:01.645 } 00:05:01.645 ]' 00:05:01.645 16:55:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:01.645 { 00:05:01.645 "nbd_device": "/dev/nbd0", 00:05:01.645 "bdev_name": "Malloc0" 00:05:01.645 }, 00:05:01.645 { 00:05:01.645 "nbd_device": "/dev/nbd1", 00:05:01.645 "bdev_name": "Malloc1" 00:05:01.645 } 00:05:01.645 ]' 00:05:01.645 16:55:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:01.903 /dev/nbd1' 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:01.903 /dev/nbd1' 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:01.903 256+0 records in 00:05:01.903 256+0 records out 00:05:01.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103422 s, 101 MB/s 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:01.903 256+0 records in 00:05:01.903 256+0 records out 00:05:01.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0139909 s, 74.9 MB/s 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:01.903 256+0 records in 00:05:01.903 256+0 records out 00:05:01.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144167 s, 72.7 MB/s 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:01.903 16:55:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.161 16:55:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.419 16:55:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:02.419 16:55:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:02.419 16:55:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.419 16:55:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:02.419 16:55:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.419 16:55:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:02.419 16:55:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:02.419 16:55:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:02.419 16:55:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:02.419 16:55:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:02.419 16:55:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:02.419 16:55:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:02.419 16:55:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:02.677 16:55:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:02.935 [2024-05-15 16:55:50.413985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:02.935 [2024-05-15 16:55:50.480699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.935 [2024-05-15 16:55:50.480701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.935 [2024-05-15 16:55:50.523191] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:02.935 [2024-05-15 16:55:50.523230] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
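The waitfornbd traces inside each round come from a readiness helper in the shared autotest_common.sh. Roughly reconstructed from the xtrace (the retry delay and the temp-file path are assumptions; the real helper may differ in detail):

waitfornbd() {
  local nbd_name=$1 i size
  # poll /proc/partitions until the kernel has registered the nbd device (up to 20 tries)
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1          # assumed delay; the trace only shows the retry bound
  done
  # prove the device is actually readable: copy one 4 KiB block off it
  dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  size=$(stat -c %s /tmp/nbdtest)
  rm -f /tmp/nbdtest
  [ "$size" != 0 ]     # a non-empty read means the device is up
}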
00:05:06.212 16:55:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:06.212 16:55:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:06.212 spdk_app_start Round 2 00:05:06.212 16:55:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2889887 /var/tmp/spdk-nbd.sock 00:05:06.212 16:55:53 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2889887 ']' 00:05:06.212 16:55:53 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.212 16:55:53 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:06.212 16:55:53 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:06.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:06.212 16:55:53 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:06.212 16:55:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.212 16:55:53 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:06.212 16:55:53 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:06.212 16:55:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.212 Malloc0 00:05:06.212 16:55:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.212 Malloc1 00:05:06.213 16:55:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.213 16:55:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.471 /dev/nbd0 00:05:06.471 16:55:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.471 16:55:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:06.471 16:55:53 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:06.471 16:55:53 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:06.471 16:55:53 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:06.471 16:55:53 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:06.471 16:55:53 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:06.471 16:55:53 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:06.471 16:55:53 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:06.471 16:55:53 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:06.471 16:55:53 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.471 1+0 records in 00:05:06.471 1+0 records out 00:05:06.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198156 s, 20.7 MB/s 00:05:06.471 16:55:53 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.471 16:55:53 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:06.471 16:55:53 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.471 16:55:53 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:06.471 16:55:53 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:06.471 16:55:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.471 16:55:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.471 16:55:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.471 /dev/nbd1 00:05:06.729 16:55:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.729 16:55:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.729 16:55:54 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:06.729 16:55:54 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:06.729 16:55:54 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:06.729 16:55:54 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:06.729 16:55:54 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:06.729 16:55:54 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:06.729 16:55:54 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:06.729 16:55:54 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:06.729 16:55:54 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.729 1+0 records in 00:05:06.729 1+0 records out 00:05:06.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017583 s, 23.3 MB/s 00:05:06.729 16:55:54 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.729 16:55:54 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:06.729 16:55:54 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:06.729 16:55:54 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:06.729 16:55:54 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:06.729 16:55:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.729 16:55:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.729 16:55:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.729 16:55:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.729 16:55:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.729 16:55:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.729 { 00:05:06.729 "nbd_device": "/dev/nbd0", 00:05:06.729 "bdev_name": "Malloc0" 00:05:06.729 }, 00:05:06.729 { 00:05:06.729 "nbd_device": "/dev/nbd1", 00:05:06.729 "bdev_name": "Malloc1" 00:05:06.729 } 00:05:06.729 ]' 00:05:06.729 16:55:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.729 { 00:05:06.729 "nbd_device": "/dev/nbd0", 00:05:06.729 "bdev_name": "Malloc0" 00:05:06.729 }, 00:05:06.729 { 00:05:06.729 "nbd_device": "/dev/nbd1", 00:05:06.729 "bdev_name": "Malloc1" 00:05:06.729 } 00:05:06.729 ]' 00:05:06.729 16:55:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.729 16:55:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.729 /dev/nbd1' 00:05:06.730 16:55:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.730 /dev/nbd1' 00:05:06.730 16:55:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.730 16:55:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.730 16:55:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.730 16:55:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.730 16:55:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.730 16:55:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.730 16:55:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.730 16:55:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.730 16:55:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.730 16:55:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.730 16:55:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.730 16:55:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.988 256+0 records in 00:05:06.988 256+0 records out 00:05:06.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103383 s, 101 MB/s 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.988 256+0 records in 00:05:06.988 256+0 records out 00:05:06.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0134715 s, 77.8 MB/s 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.988 256+0 records in 00:05:06.988 256+0 records out 00:05:06.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014473 s, 72.5 MB/s 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.988 16:55:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.245 16:55:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.245 16:55:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.246 16:55:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.503 16:55:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.503 16:55:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.503 16:55:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.503 16:55:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.503 16:55:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.503 16:55:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.503 16:55:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.503 16:55:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.503 16:55:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.503 16:55:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.503 16:55:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.503 16:55:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.503 16:55:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.760 16:55:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:08.018 [2024-05-15 16:55:55.477311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.018 [2024-05-15 16:55:55.543405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.018 [2024-05-15 16:55:55.543406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.018 [2024-05-15 16:55:55.585047] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.018 [2024-05-15 16:55:55.585091] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
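After the devices are stopped, each round checks that nothing is still exported. The nbd_get_count pipeline seen in the trace amounts to the following (rpc.py path and socket are from the log; the wrapper function is illustrative):

nbd_get_count() {
  local rpc_server=$1 disks_json names
  disks_json=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  # grep -c exits non-zero when the count is 0, hence the bare "true" in the trace
  echo "$names" | grep -c /dev/nbd || true
}
count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
[ "$count" -eq 0 ]    # the test expects no NBD devices left before killing the app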
00:05:11.296 16:55:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2889887 /var/tmp/spdk-nbd.sock 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2889887 ']' 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:11.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:11.296 16:55:58 event.app_repeat -- event/event.sh@39 -- # killprocess 2889887 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 2889887 ']' 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 2889887 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2889887 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2889887' 00:05:11.296 killing process with pid 2889887 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@965 -- # kill 2889887 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@970 -- # wait 2889887 00:05:11.296 spdk_app_start is called in Round 0. 00:05:11.296 Shutdown signal received, stop current app iteration 00:05:11.296 Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 reinitialization... 00:05:11.296 spdk_app_start is called in Round 1. 00:05:11.296 Shutdown signal received, stop current app iteration 00:05:11.296 Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 reinitialization... 00:05:11.296 spdk_app_start is called in Round 2. 00:05:11.296 Shutdown signal received, stop current app iteration 00:05:11.296 Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 reinitialization... 00:05:11.296 spdk_app_start is called in Round 3. 
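killprocess, traced here for the app_repeat pid (and earlier for the scheduler app), is the standard teardown helper. An approximate reconstruction from the xtrace (the sudo branch of the real helper is omitted):

killprocess() {
  local pid=$1 process_name
  [ -n "$pid" ] || return 1
  kill -0 "$pid" || return 1                          # still running?
  if [ "$(uname)" = Linux ]; then
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    # the real helper special-cases a wrapping sudo process; skipped in this sketch
    [ "$process_name" = sudo ] && return 1
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" || true                                 # reap it so sockets and hugepages are released
}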
00:05:11.296 Shutdown signal received, stop current app iteration 00:05:11.296 16:55:58 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:11.296 16:55:58 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:11.296 00:05:11.296 real 0m16.231s 00:05:11.296 user 0m35.051s 00:05:11.296 sys 0m2.337s 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.296 16:55:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.296 ************************************ 00:05:11.296 END TEST app_repeat 00:05:11.296 ************************************ 00:05:11.296 16:55:58 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:11.296 16:55:58 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:11.296 16:55:58 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.296 16:55:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.296 16:55:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.296 ************************************ 00:05:11.296 START TEST cpu_locks 00:05:11.296 ************************************ 00:05:11.296 16:55:58 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:11.296 * Looking for test storage... 00:05:11.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:11.296 16:55:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:11.296 16:55:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:11.296 16:55:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:11.296 16:55:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:11.296 16:55:58 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.296 16:55:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.296 16:55:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.296 ************************************ 00:05:11.296 START TEST default_locks 00:05:11.296 ************************************ 00:05:11.296 16:55:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:05:11.296 16:55:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2892874 00:05:11.296 16:55:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2892874 00:05:11.296 16:55:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.296 16:55:58 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2892874 ']' 00:05:11.296 16:55:58 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.296 16:55:58 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:11.296 16:55:58 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
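The default_locks test that opens here starts a single spdk_tgt pinned to core 0 (-m 0x1) and then asserts that the process is actually holding the per-core lock before killing it. A sketch of that assertion, assuming the lock files live under /var/tmp/spdk_cpu_lock_* as the later check_remaining_locks step shows (the stray "lslocks: write error" lines in the trace are consistent with grep -q closing the pipe after the first match and can be ignored):

  spdk_tgt_pid=2892874                        # pid reported by waitforlisten above; substitute the live pid
  if lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock; then
      echo "core lock held by pid $spdk_tgt_pid"
  fi
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null     # one file per claimed core, e.g. spdk_cpu_lock_000 for core 0

After the process is killed, the no_locks step below expects the /var/tmp/spdk_cpu_lock_* glob to come back empty.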
00:05:11.296 16:55:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:11.296 16:55:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.296 [2024-05-15 16:55:58.925494] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:11.296 [2024-05-15 16:55:58.925541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2892874 ] 00:05:11.296 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.554 [2024-05-15 16:55:58.982083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.554 [2024-05-15 16:55:59.060835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.119 16:55:59 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:12.119 16:55:59 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:05:12.119 16:55:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2892874 00:05:12.119 16:55:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2892874 00:05:12.119 16:55:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.694 lslocks: write error 00:05:12.694 16:56:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2892874 00:05:12.694 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 2892874 ']' 00:05:12.694 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 2892874 00:05:12.694 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:05:12.694 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:12.694 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2892874 00:05:12.694 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:12.694 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:12.694 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2892874' 00:05:12.694 killing process with pid 2892874 00:05:12.694 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 2892874 00:05:12.694 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 2892874 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2892874 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2892874 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 2892874 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2892874 ']' 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2892874) - No such process 00:05:13.261 ERROR: process (pid: 2892874) is no longer running 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.261 00:05:13.261 real 0m1.810s 00:05:13.261 user 0m1.914s 00:05:13.261 sys 0m0.564s 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.261 16:56:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.261 ************************************ 00:05:13.261 END TEST default_locks 00:05:13.261 ************************************ 00:05:13.261 16:56:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:13.261 16:56:00 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.261 16:56:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.261 16:56:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.261 ************************************ 00:05:13.261 START TEST default_locks_via_rpc 00:05:13.261 ************************************ 00:05:13.261 16:56:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:05:13.261 16:56:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2893359 00:05:13.261 16:56:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2893359 00:05:13.261 16:56:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.261 16:56:00 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2893359 ']' 00:05:13.261 16:56:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.261 16:56:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:13.261 16:56:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.261 16:56:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:13.261 16:56:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.261 [2024-05-15 16:56:00.806967] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:13.261 [2024-05-15 16:56:00.807009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893359 ] 00:05:13.261 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.261 [2024-05-15 16:56:00.860427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.519 [2024-05-15 16:56:00.929420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2893359 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2893359 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2893359 00:05:14.085 16:56:01 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 2893359 ']' 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 2893359 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:14.085 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2893359 00:05:14.342 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:14.342 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:14.342 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2893359' 00:05:14.342 killing process with pid 2893359 00:05:14.342 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 2893359 00:05:14.342 16:56:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 2893359 00:05:14.600 00:05:14.600 real 0m1.351s 00:05:14.600 user 0m1.417s 00:05:14.600 sys 0m0.398s 00:05:14.600 16:56:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.600 16:56:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.600 ************************************ 00:05:14.600 END TEST default_locks_via_rpc 00:05:14.600 ************************************ 00:05:14.600 16:56:02 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:14.600 16:56:02 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.600 16:56:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.600 16:56:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.600 ************************************ 00:05:14.600 START TEST non_locking_app_on_locked_coremask 00:05:14.600 ************************************ 00:05:14.600 16:56:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:05:14.600 16:56:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2893623 00:05:14.600 16:56:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2893623 /var/tmp/spdk.sock 00:05:14.600 16:56:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.600 16:56:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2893623 ']' 00:05:14.600 16:56:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.600 16:56:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:14.600 16:56:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
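default_locks_via_rpc, which starts here, toggles the same lock files at runtime over JSON-RPC instead of at process start: framework_disable_cpumask_locks should leave no spdk_cpu_lock_* file behind, and framework_enable_cpumask_locks should re-claim the core. A hedged sketch of that round trip against the default /var/tmp/spdk.sock socket, using the rpc.py client shown elsewhere in this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk.sock
  "$RPC" -s "$SOCK" framework_disable_cpumask_locks          # drop the per-core lock files
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null | wc -l            # expected: 0, matching the no_locks check
  "$RPC" -s "$SOCK" framework_enable_cpumask_locks           # re-claim every core in the target's mask
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock && echo "lock re-acquired"
  # $spdk_tgt_pid is the pid printed by waitforlisten above (2893359 in this run)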
00:05:14.600 16:56:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:14.600 16:56:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.600 [2024-05-15 16:56:02.227003] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:14.600 [2024-05-15 16:56:02.227042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893623 ] 00:05:14.600 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.857 [2024-05-15 16:56:02.280776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.857 [2024-05-15 16:56:02.353002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.422 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:15.422 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:15.422 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:15.422 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2893649 00:05:15.422 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2893649 /var/tmp/spdk2.sock 00:05:15.422 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2893649 ']' 00:05:15.422 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.422 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:15.422 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.422 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:15.422 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.422 [2024-05-15 16:56:03.061889] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:15.422 [2024-05-15 16:56:03.061940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893649 ] 00:05:15.680 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.680 [2024-05-15 16:56:03.139240] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
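non_locking_app_on_locked_coremask demonstrates the intended escape hatch for sharing a core: the first target claims core 0 normally, while the second is started on the same -m 0x1 mask with --disable-cpumask-locks and its own -r /var/tmp/spdk2.sock socket, so it never attempts the claim and both can run. A minimal sketch of that launch pattern with the binary and flags used in this run (the real test waits for each RPC socket before proceeding; the plain sleeps here are a simplification):

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 & first=$!
  sleep 2
  "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & second=$!
  sleep 2
  lslocks -p "$first" | grep spdk_cpu_lock                   # only the first pid should hold the core 0 lock
  lslocks -p "$second" | grep spdk_cpu_lock || echo "second instance holds no core lock"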
00:05:15.680 [2024-05-15 16:56:03.139273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.680 [2024-05-15 16:56:03.284978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.245 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:16.245 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:16.245 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2893623 00:05:16.245 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2893623 00:05:16.245 16:56:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:16.810 lslocks: write error 00:05:16.810 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2893623 00:05:16.810 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2893623 ']' 00:05:16.810 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2893623 00:05:16.810 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:16.810 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:16.810 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2893623 00:05:16.810 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:16.810 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:16.810 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2893623' 00:05:16.810 killing process with pid 2893623 00:05:16.810 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2893623 00:05:16.810 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2893623 00:05:17.375 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2893649 00:05:17.375 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2893649 ']' 00:05:17.375 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2893649 00:05:17.375 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:17.375 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:17.375 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2893649 00:05:17.375 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:17.375 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:17.375 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2893649' 00:05:17.375 
killing process with pid 2893649 00:05:17.375 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2893649 00:05:17.375 16:56:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2893649 00:05:17.941 00:05:17.941 real 0m3.136s 00:05:17.941 user 0m3.342s 00:05:17.941 sys 0m0.861s 00:05:17.941 16:56:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.941 16:56:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.941 ************************************ 00:05:17.941 END TEST non_locking_app_on_locked_coremask 00:05:17.941 ************************************ 00:05:17.941 16:56:05 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:17.941 16:56:05 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.941 16:56:05 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.941 16:56:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.941 ************************************ 00:05:17.941 START TEST locking_app_on_unlocked_coremask 00:05:17.941 ************************************ 00:05:17.941 16:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:05:17.941 16:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2894129 00:05:17.941 16:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2894129 /var/tmp/spdk.sock 00:05:17.941 16:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2894129 ']' 00:05:17.941 16:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.941 16:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:17.941 16:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.941 16:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:17.941 16:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.941 16:56:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:17.941 [2024-05-15 16:56:05.419222] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:17.941 [2024-05-15 16:56:05.419263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894129 ] 00:05:17.941 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.941 [2024-05-15 16:56:05.470841] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:17.941 [2024-05-15 16:56:05.470864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.941 [2024-05-15 16:56:05.550097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.875 16:56:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:18.875 16:56:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:18.875 16:56:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2894353 00:05:18.875 16:56:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2894353 /var/tmp/spdk2.sock 00:05:18.875 16:56:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2894353 ']' 00:05:18.875 16:56:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:18.875 16:56:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.875 16:56:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:18.875 16:56:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.875 16:56:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:18.875 16:56:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.875 [2024-05-15 16:56:06.233359] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:05:18.875 [2024-05-15 16:56:06.233406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894353 ] 00:05:18.875 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.875 [2024-05-15 16:56:06.303727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.875 [2024-05-15 16:56:06.452767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.440 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:19.440 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:19.440 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2894353 00:05:19.440 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2894353 00:05:19.440 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.005 lslocks: write error 00:05:20.005 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2894129 00:05:20.005 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2894129 ']' 00:05:20.005 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2894129 00:05:20.005 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:20.005 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:20.005 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2894129 00:05:20.005 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:20.005 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:20.005 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2894129' 00:05:20.005 killing process with pid 2894129 00:05:20.005 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2894129 00:05:20.005 16:56:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2894129 00:05:20.571 16:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2894353 00:05:20.571 16:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2894353 ']' 00:05:20.571 16:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2894353 00:05:20.571 16:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:20.571 16:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:20.571 16:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2894353 00:05:20.571 16:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:05:20.571 16:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:20.571 16:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2894353' 00:05:20.571 killing process with pid 2894353 00:05:20.571 16:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2894353 00:05:20.571 16:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2894353 00:05:21.188 00:05:21.188 real 0m3.176s 00:05:21.188 user 0m3.383s 00:05:21.188 sys 0m0.855s 00:05:21.188 16:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:21.188 16:56:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.188 ************************************ 00:05:21.188 END TEST locking_app_on_unlocked_coremask 00:05:21.188 ************************************ 00:05:21.188 16:56:08 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:21.188 16:56:08 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:21.188 16:56:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:21.188 16:56:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.188 ************************************ 00:05:21.188 START TEST locking_app_on_locked_coremask 00:05:21.188 ************************************ 00:05:21.188 16:56:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:05:21.188 16:56:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2894728 00:05:21.188 16:56:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2894728 /var/tmp/spdk.sock 00:05:21.188 16:56:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.188 16:56:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2894728 ']' 00:05:21.188 16:56:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.188 16:56:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:21.188 16:56:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.188 16:56:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:21.188 16:56:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.188 [2024-05-15 16:56:08.671951] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:05:21.188 [2024-05-15 16:56:08.671996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894728 ] 00:05:21.188 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.188 [2024-05-15 16:56:08.726785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.188 [2024-05-15 16:56:08.798549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2894864 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2894864 /var/tmp/spdk2.sock 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2894864 /var/tmp/spdk2.sock 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2894864 /var/tmp/spdk2.sock 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2894864 ']' 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:22.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:22.122 16:56:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.122 [2024-05-15 16:56:09.506626] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:05:22.122 [2024-05-15 16:56:09.506670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894864 ] 00:05:22.122 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.122 [2024-05-15 16:56:09.582465] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2894728 has claimed it. 00:05:22.122 [2024-05-15 16:56:09.582505] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:22.687 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2894864) - No such process 00:05:22.687 ERROR: process (pid: 2894864) is no longer running 00:05:22.687 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:22.687 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:22.687 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:22.687 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.687 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.687 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.687 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2894728 00:05:22.687 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2894728 00:05:22.687 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.253 lslocks: write error 00:05:23.253 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2894728 00:05:23.253 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2894728 ']' 00:05:23.253 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2894728 00:05:23.253 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:23.253 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:23.253 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2894728 00:05:23.253 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:23.253 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:23.253 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2894728' 00:05:23.253 killing process with pid 2894728 00:05:23.253 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2894728 00:05:23.253 16:56:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2894728 00:05:23.511 00:05:23.511 real 0m2.437s 00:05:23.511 user 0m2.676s 00:05:23.511 sys 0m0.631s 00:05:23.511 16:56:11 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.511 16:56:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.511 ************************************ 00:05:23.511 END TEST locking_app_on_locked_coremask 00:05:23.511 ************************************ 00:05:23.511 16:56:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:23.511 16:56:11 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.511 16:56:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.511 16:56:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.511 ************************************ 00:05:23.511 START TEST locking_overlapped_coremask 00:05:23.511 ************************************ 00:05:23.511 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:05:23.511 16:56:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2895125 00:05:23.511 16:56:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2895125 /var/tmp/spdk.sock 00:05:23.511 16:56:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:23.511 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2895125 ']' 00:05:23.511 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.511 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:23.511 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.511 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:23.511 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.769 [2024-05-15 16:56:11.179631] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:05:23.769 [2024-05-15 16:56:11.179672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895125 ] 00:05:23.769 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.769 [2024-05-15 16:56:11.233853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.769 [2024-05-15 16:56:11.309415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.770 [2024-05-15 16:56:11.309510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.770 [2024-05-15 16:56:11.309511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2895357 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2895357 /var/tmp/spdk2.sock 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2895357 /var/tmp/spdk2.sock 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2895357 /var/tmp/spdk2.sock 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2895357 ']' 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:24.335 16:56:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.593 [2024-05-15 16:56:12.025864] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:05:24.593 [2024-05-15 16:56:12.025910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895357 ] 00:05:24.593 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.593 [2024-05-15 16:56:12.101030] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2895125 has claimed it. 00:05:24.593 [2024-05-15 16:56:12.101068] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:25.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2895357) - No such process 00:05:25.158 ERROR: process (pid: 2895357) is no longer running 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2895125 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 2895125 ']' 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 2895125 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2895125 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2895125' 00:05:25.158 killing process with pid 2895125 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
2895125 00:05:25.158 16:56:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 2895125 00:05:25.417 00:05:25.417 real 0m1.908s 00:05:25.417 user 0m5.352s 00:05:25.417 sys 0m0.384s 00:05:25.417 16:56:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.417 16:56:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.417 ************************************ 00:05:25.417 END TEST locking_overlapped_coremask 00:05:25.417 ************************************ 00:05:25.417 16:56:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:25.417 16:56:13 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.417 16:56:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.417 16:56:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.675 ************************************ 00:05:25.675 START TEST locking_overlapped_coremask_via_rpc 00:05:25.675 ************************************ 00:05:25.675 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:05:25.675 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2895615 00:05:25.675 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2895615 /var/tmp/spdk.sock 00:05:25.675 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:25.675 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2895615 ']' 00:05:25.675 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.675 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:25.675 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.675 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:25.675 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.675 [2024-05-15 16:56:13.162488] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:25.675 [2024-05-15 16:56:13.162533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895615 ] 00:05:25.675 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.675 [2024-05-15 16:56:13.215183] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:25.675 [2024-05-15 16:56:13.215210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.675 [2024-05-15 16:56:13.285604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.675 [2024-05-15 16:56:13.285700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.675 [2024-05-15 16:56:13.285701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.607 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:26.607 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:26.607 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:26.607 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2895643 00:05:26.607 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2895643 /var/tmp/spdk2.sock 00:05:26.608 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2895643 ']' 00:05:26.608 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.608 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:26.608 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.608 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:26.608 16:56:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.608 [2024-05-15 16:56:13.997442] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:26.608 [2024-05-15 16:56:13.997490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895643 ] 00:05:26.608 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.608 [2024-05-15 16:56:14.073499] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:26.608 [2024-05-15 16:56:14.073531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.608 [2024-05-15 16:56:14.224451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.608 [2024-05-15 16:56:14.224579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.608 [2024-05-15 16:56:14.224580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.173 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.430 [2024-05-15 16:56:14.836244] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2895615 has claimed it. 
00:05:27.430 request: 00:05:27.430 { 00:05:27.430 "method": "framework_enable_cpumask_locks", 00:05:27.430 "req_id": 1 00:05:27.430 } 00:05:27.430 Got JSON-RPC error response 00:05:27.430 response: 00:05:27.430 { 00:05:27.430 "code": -32603, 00:05:27.430 "message": "Failed to claim CPU core: 2" 00:05:27.430 } 00:05:27.430 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:27.430 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:27.430 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.430 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.430 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.430 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2895615 /var/tmp/spdk.sock 00:05:27.430 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2895615 ']' 00:05:27.430 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.430 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:27.430 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.430 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:27.430 16:56:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.430 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:27.430 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:27.430 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2895643 /var/tmp/spdk2.sock 00:05:27.430 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2895643 ']' 00:05:27.430 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.430 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:27.430 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:27.430 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:27.430 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.688 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:27.688 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:27.688 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:27.688 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.688 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.688 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.688 00:05:27.688 real 0m2.089s 00:05:27.688 user 0m0.841s 00:05:27.688 sys 0m0.174s 00:05:27.688 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.688 16:56:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.688 ************************************ 00:05:27.688 END TEST locking_overlapped_coremask_via_rpc 00:05:27.688 ************************************ 00:05:27.688 16:56:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:27.688 16:56:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2895615 ]] 00:05:27.688 16:56:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2895615 00:05:27.688 16:56:15 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2895615 ']' 00:05:27.688 16:56:15 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2895615 00:05:27.688 16:56:15 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:27.688 16:56:15 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:27.688 16:56:15 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2895615 00:05:27.688 16:56:15 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:27.688 16:56:15 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:27.688 16:56:15 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2895615' 00:05:27.688 killing process with pid 2895615 00:05:27.688 16:56:15 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2895615 00:05:27.688 16:56:15 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2895615 00:05:28.254 16:56:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2895643 ]] 00:05:28.254 16:56:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2895643 00:05:28.254 16:56:15 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2895643 ']' 00:05:28.254 16:56:15 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2895643 00:05:28.254 16:56:15 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:28.254 16:56:15 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:05:28.254 16:56:15 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2895643 00:05:28.254 16:56:15 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:28.255 16:56:15 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:28.255 16:56:15 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2895643' 00:05:28.255 killing process with pid 2895643 00:05:28.255 16:56:15 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2895643 00:05:28.255 16:56:15 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2895643 00:05:28.512 16:56:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:28.512 16:56:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:28.512 16:56:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2895615 ]] 00:05:28.512 16:56:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2895615 00:05:28.512 16:56:16 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2895615 ']' 00:05:28.512 16:56:16 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2895615 00:05:28.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2895615) - No such process 00:05:28.512 16:56:16 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2895615 is not found' 00:05:28.512 Process with pid 2895615 is not found 00:05:28.512 16:56:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2895643 ]] 00:05:28.512 16:56:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2895643 00:05:28.512 16:56:16 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2895643 ']' 00:05:28.512 16:56:16 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2895643 00:05:28.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2895643) - No such process 00:05:28.512 16:56:16 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2895643 is not found' 00:05:28.512 Process with pid 2895643 is not found 00:05:28.512 16:56:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:28.512 00:05:28.512 real 0m17.280s 00:05:28.512 user 0m29.600s 00:05:28.512 sys 0m4.764s 00:05:28.512 16:56:16 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.512 16:56:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.512 ************************************ 00:05:28.512 END TEST cpu_locks 00:05:28.512 ************************************ 00:05:28.512 00:05:28.512 real 0m43.467s 00:05:28.512 user 1m23.914s 00:05:28.512 sys 0m8.032s 00:05:28.512 16:56:16 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.512 16:56:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.512 ************************************ 00:05:28.512 END TEST event 00:05:28.512 ************************************ 00:05:28.512 16:56:16 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:28.512 16:56:16 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.512 16:56:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.512 16:56:16 -- common/autotest_common.sh@10 -- # set +x 00:05:28.512 ************************************ 00:05:28.512 START TEST thread 00:05:28.512 ************************************ 00:05:28.512 16:56:16 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:28.770 * Looking for test storage... 00:05:28.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:28.770 16:56:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:28.770 16:56:16 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:28.770 16:56:16 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.770 16:56:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.770 ************************************ 00:05:28.770 START TEST thread_poller_perf 00:05:28.770 ************************************ 00:05:28.770 16:56:16 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:28.770 [2024-05-15 16:56:16.264044] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:28.770 [2024-05-15 16:56:16.264109] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896184 ] 00:05:28.770 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.770 [2024-05-15 16:56:16.322617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.770 [2024-05-15 16:56:16.395396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.770 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:30.141 ====================================== 00:05:30.141 busy:2306742640 (cyc) 00:05:30.141 total_run_count: 408000 00:05:30.141 tsc_hz: 2300000000 (cyc) 00:05:30.141 ====================================== 00:05:30.141 poller_cost: 5653 (cyc), 2457 (nsec) 00:05:30.141 00:05:30.141 real 0m1.244s 00:05:30.141 user 0m1.169s 00:05:30.141 sys 0m0.070s 00:05:30.141 16:56:17 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.141 16:56:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.141 ************************************ 00:05:30.141 END TEST thread_poller_perf 00:05:30.141 ************************************ 00:05:30.141 16:56:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:30.141 16:56:17 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:30.141 16:56:17 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.141 16:56:17 thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.141 ************************************ 00:05:30.141 START TEST thread_poller_perf 00:05:30.141 ************************************ 00:05:30.141 16:56:17 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:30.141 [2024-05-15 16:56:17.581583] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:05:30.142 [2024-05-15 16:56:17.581652] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896440 ] 00:05:30.142 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.142 [2024-05-15 16:56:17.638139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.142 [2024-05-15 16:56:17.708569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.142 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:31.513 ====================================== 00:05:31.513 busy:2301483778 (cyc) 00:05:31.513 total_run_count: 5380000 00:05:31.513 tsc_hz: 2300000000 (cyc) 00:05:31.513 ====================================== 00:05:31.513 poller_cost: 427 (cyc), 185 (nsec) 00:05:31.513 00:05:31.513 real 0m1.235s 00:05:31.513 user 0m1.163s 00:05:31.513 sys 0m0.068s 00:05:31.513 16:56:18 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.513 16:56:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.513 ************************************ 00:05:31.513 END TEST thread_poller_perf 00:05:31.513 ************************************ 00:05:31.513 16:56:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:31.513 00:05:31.513 real 0m2.705s 00:05:31.513 user 0m2.424s 00:05:31.513 sys 0m0.281s 00:05:31.513 16:56:18 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.513 16:56:18 thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.513 ************************************ 00:05:31.513 END TEST thread 00:05:31.513 ************************************ 00:05:31.513 16:56:18 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:31.513 16:56:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.513 16:56:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.513 16:56:18 -- common/autotest_common.sh@10 -- # set +x 00:05:31.513 ************************************ 00:05:31.513 START TEST accel 00:05:31.513 ************************************ 00:05:31.513 16:56:18 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:31.513 * Looking for test storage... 
00:05:31.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:31.513 16:56:18 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:31.513 16:56:18 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:31.513 16:56:18 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:31.513 16:56:18 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2896727 00:05:31.513 16:56:18 accel -- accel/accel.sh@63 -- # waitforlisten 2896727 00:05:31.513 16:56:18 accel -- common/autotest_common.sh@827 -- # '[' -z 2896727 ']' 00:05:31.513 16:56:18 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.513 16:56:18 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:31.513 16:56:18 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:31.513 16:56:18 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:31.513 16:56:18 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.513 16:56:18 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.514 16:56:18 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:31.514 16:56:18 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.514 16:56:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.514 16:56:18 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.514 16:56:18 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.514 16:56:18 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.514 16:56:18 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:31.514 16:56:18 accel -- accel/accel.sh@41 -- # jq -r . 00:05:31.514 [2024-05-15 16:56:19.025440] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:31.514 [2024-05-15 16:56:19.025490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896727 ] 00:05:31.514 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.514 [2024-05-15 16:56:19.077076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.514 [2024-05-15 16:56:19.150891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.447 16:56:19 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:32.447 16:56:19 accel -- common/autotest_common.sh@860 -- # return 0 00:05:32.447 16:56:19 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:32.447 16:56:19 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:32.447 16:56:19 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:32.447 16:56:19 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:32.447 16:56:19 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:32.447 16:56:19 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:32.447 16:56:19 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:32.447 16:56:19 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.447 16:56:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.447 16:56:19 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.447 16:56:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.447 16:56:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.447 16:56:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.447 16:56:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.447 16:56:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.447 16:56:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.447 16:56:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.447 16:56:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.447 16:56:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.447 16:56:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.447 16:56:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.447 16:56:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.447 16:56:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.447 16:56:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.447 16:56:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.447 16:56:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.447 16:56:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.447 16:56:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.447 16:56:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.447 16:56:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.447 16:56:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.447 
16:56:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.447 16:56:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.447 16:56:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.448 16:56:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.448 16:56:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.448 16:56:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.448 16:56:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.448 16:56:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.448 16:56:19 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.448 16:56:19 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.448 16:56:19 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.448 16:56:19 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.448 16:56:19 accel -- accel/accel.sh@75 -- # killprocess 2896727 00:05:32.448 16:56:19 accel -- common/autotest_common.sh@946 -- # '[' -z 2896727 ']' 00:05:32.448 16:56:19 accel -- common/autotest_common.sh@950 -- # kill -0 2896727 00:05:32.448 16:56:19 accel -- common/autotest_common.sh@951 -- # uname 00:05:32.448 16:56:19 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:32.448 16:56:19 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2896727 00:05:32.448 16:56:19 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:32.448 16:56:19 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:32.448 16:56:19 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2896727' 00:05:32.448 killing process with pid 2896727 00:05:32.448 16:56:19 accel -- common/autotest_common.sh@965 -- # kill 2896727 00:05:32.448 16:56:19 accel -- common/autotest_common.sh@970 -- # wait 2896727 00:05:32.707 16:56:20 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:32.707 16:56:20 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:32.707 16:56:20 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:32.707 16:56:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.707 16:56:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.707 16:56:20 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:05:32.707 16:56:20 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:32.707 16:56:20 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:32.707 16:56:20 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.707 16:56:20 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.707 16:56:20 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.707 16:56:20 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.707 16:56:20 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.707 16:56:20 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:32.707 16:56:20 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:32.707 16:56:20 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.707 16:56:20 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:32.707 16:56:20 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:32.707 16:56:20 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:32.707 16:56:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.707 16:56:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.965 ************************************ 00:05:32.965 START TEST accel_missing_filename 00:05:32.965 ************************************ 00:05:32.965 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:05:32.965 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:32.965 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:32.965 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:32.965 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.965 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:32.965 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.965 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:32.965 16:56:20 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:32.965 16:56:20 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:32.965 16:56:20 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.965 16:56:20 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.965 16:56:20 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.965 16:56:20 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.965 16:56:20 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.965 16:56:20 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:32.965 16:56:20 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:32.965 [2024-05-15 16:56:20.403022] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:32.965 [2024-05-15 16:56:20.403093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896994 ] 00:05:32.965 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.965 [2024-05-15 16:56:20.459524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.965 [2024-05-15 16:56:20.532972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.965 [2024-05-15 16:56:20.574814] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.223 [2024-05-15 16:56:20.635230] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:33.223 A filename is required. 
00:05:33.223 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:33.223 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.223 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:33.223 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:33.223 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:33.223 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.223 00:05:33.223 real 0m0.359s 00:05:33.223 user 0m0.273s 00:05:33.223 sys 0m0.125s 00:05:33.223 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.223 16:56:20 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:33.223 ************************************ 00:05:33.223 END TEST accel_missing_filename 00:05:33.223 ************************************ 00:05:33.223 16:56:20 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.223 16:56:20 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:33.223 16:56:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.223 16:56:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.223 ************************************ 00:05:33.223 START TEST accel_compress_verify 00:05:33.223 ************************************ 00:05:33.223 16:56:20 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.223 16:56:20 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:33.223 16:56:20 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.223 16:56:20 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:33.223 16:56:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.223 16:56:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:33.223 16:56:20 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.223 16:56:20 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.223 16:56:20 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:33.223 16:56:20 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:33.223 16:56:20 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.223 16:56:20 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.223 16:56:20 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.223 16:56:20 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.223 16:56:20 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.223 
16:56:20 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:33.223 16:56:20 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:33.223 [2024-05-15 16:56:20.827990] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:33.223 [2024-05-15 16:56:20.828060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897025 ] 00:05:33.223 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.481 [2024-05-15 16:56:20.885128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.481 [2024-05-15 16:56:20.958925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.481 [2024-05-15 16:56:21.000626] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.481 [2024-05-15 16:56:21.061318] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:33.739 00:05:33.739 Compression does not support the verify option, aborting. 00:05:33.739 16:56:21 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:33.739 16:56:21 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.739 16:56:21 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:33.739 16:56:21 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:33.739 16:56:21 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:33.739 16:56:21 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.739 00:05:33.739 real 0m0.359s 00:05:33.739 user 0m0.285s 00:05:33.739 sys 0m0.115s 00:05:33.739 16:56:21 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.739 16:56:21 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:33.739 ************************************ 00:05:33.739 END TEST accel_compress_verify 00:05:33.739 ************************************ 00:05:33.739 16:56:21 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:33.739 16:56:21 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:33.739 16:56:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.739 16:56:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.739 ************************************ 00:05:33.739 START TEST accel_wrong_workload 00:05:33.739 ************************************ 00:05:33.739 16:56:21 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:05:33.739 16:56:21 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:33.739 16:56:21 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:33.739 16:56:21 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:33.739 16:56:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.739 16:56:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:33.739 16:56:21 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.739 16:56:21 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:33.739 16:56:21 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:33.739 16:56:21 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:33.739 16:56:21 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.739 16:56:21 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.739 16:56:21 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.739 16:56:21 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.739 16:56:21 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.739 16:56:21 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:33.739 16:56:21 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:33.739 Unsupported workload type: foobar 00:05:33.739 [2024-05-15 16:56:21.261148] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:33.739 accel_perf options: 00:05:33.739 [-h help message] 00:05:33.739 [-q queue depth per core] 00:05:33.739 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:33.739 [-T number of threads per core 00:05:33.739 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:33.739 [-t time in seconds] 00:05:33.739 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:33.739 [ dif_verify, , dif_generate, dif_generate_copy 00:05:33.739 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:33.739 [-l for compress/decompress workloads, name of uncompressed input file 00:05:33.739 [-S for crc32c workload, use this seed value (default 0) 00:05:33.739 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:33.739 [-f for fill workload, use this BYTE value (default 255) 00:05:33.739 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:33.739 [-y verify result if this switch is on] 00:05:33.739 [-a tasks to allocate per core (default: same value as -q)] 00:05:33.739 Can be used to spread operations across a wider range of memory. 
00:05:33.739 16:56:21 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:33.739 16:56:21 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.739 16:56:21 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:33.739 16:56:21 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.739 00:05:33.739 real 0m0.034s 00:05:33.739 user 0m0.016s 00:05:33.739 sys 0m0.018s 00:05:33.739 16:56:21 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.739 16:56:21 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:33.739 ************************************ 00:05:33.739 END TEST accel_wrong_workload 00:05:33.739 ************************************ 00:05:33.739 Error: writing output failed: Broken pipe 00:05:33.739 16:56:21 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:33.739 16:56:21 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:33.739 16:56:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.739 16:56:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.739 ************************************ 00:05:33.739 START TEST accel_negative_buffers 00:05:33.739 ************************************ 00:05:33.739 16:56:21 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:33.739 16:56:21 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:33.739 16:56:21 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:33.739 16:56:21 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:33.739 16:56:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.739 16:56:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:33.739 16:56:21 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.739 16:56:21 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:33.739 16:56:21 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:33.739 16:56:21 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:33.739 16:56:21 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.739 16:56:21 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.739 16:56:21 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.739 16:56:21 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.739 16:56:21 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.739 16:56:21 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:33.739 16:56:21 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:33.739 -x option must be non-negative. 
00:05:33.739 [2024-05-15 16:56:21.366220] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:33.739 accel_perf options: 00:05:33.739 [-h help message] 00:05:33.739 [-q queue depth per core] 00:05:33.739 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:33.739 [-T number of threads per core 00:05:33.739 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:33.739 [-t time in seconds] 00:05:33.739 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:33.740 [ dif_verify, , dif_generate, dif_generate_copy 00:05:33.740 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:33.740 [-l for compress/decompress workloads, name of uncompressed input file 00:05:33.740 [-S for crc32c workload, use this seed value (default 0) 00:05:33.740 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:33.740 [-f for fill workload, use this BYTE value (default 255) 00:05:33.740 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:33.740 [-y verify result if this switch is on] 00:05:33.740 [-a tasks to allocate per core (default: same value as -q)] 00:05:33.740 Can be used to spread operations across a wider range of memory. 00:05:33.740 16:56:21 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:33.740 16:56:21 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.740 16:56:21 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:33.740 16:56:21 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.740 00:05:33.740 real 0m0.032s 00:05:33.740 user 0m0.021s 00:05:33.740 sys 0m0.011s 00:05:33.740 16:56:21 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.740 16:56:21 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:33.740 ************************************ 00:05:33.740 END TEST accel_negative_buffers 00:05:33.740 ************************************ 00:05:33.740 Error: writing output failed: Broken pipe 00:05:33.998 16:56:21 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:33.998 16:56:21 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:33.998 16:56:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.998 16:56:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.998 ************************************ 00:05:33.998 START TEST accel_crc32c 00:05:33.998 ************************************ 00:05:33.998 16:56:21 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:33.998 [2024-05-15 16:56:21.468660] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:33.998 [2024-05-15 16:56:21.468725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897303 ] 00:05:33.998 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.998 [2024-05-15 16:56:21.524842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.998 [2024-05-15 16:56:21.597397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:33.998 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.999 16:56:21 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.999 16:56:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.372 16:56:22 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:35.372 16:56:22 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.372 00:05:35.372 real 0m1.356s 00:05:35.372 user 0m1.250s 00:05:35.372 sys 0m0.111s 00:05:35.372 16:56:22 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:35.372 16:56:22 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:35.372 ************************************ 00:05:35.372 END TEST accel_crc32c 00:05:35.372 ************************************ 00:05:35.372 16:56:22 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:35.372 16:56:22 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:35.372 16:56:22 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:35.372 16:56:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.372 ************************************ 00:05:35.372 START TEST accel_crc32c_C2 00:05:35.372 ************************************ 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:35.372 16:56:22 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:35.372 [2024-05-15 16:56:22.884743] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:35.372 [2024-05-15 16:56:22.884808] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897553 ] 00:05:35.372 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.372 [2024-05-15 16:56:22.939845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.372 [2024-05-15 16:56:23.010374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:35.629 16:56:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.563 16:56:24 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.563 00:05:36.563 real 0m1.351s 00:05:36.563 user 0m1.242s 00:05:36.563 sys 0m0.114s 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.563 16:56:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:36.563 ************************************ 00:05:36.563 END TEST accel_crc32c_C2 00:05:36.563 ************************************ 00:05:36.821 16:56:24 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:36.821 16:56:24 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:36.821 16:56:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.821 16:56:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.821 ************************************ 00:05:36.821 START TEST accel_copy 00:05:36.821 ************************************ 00:05:36.821 16:56:24 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:36.821 16:56:24 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:36.821 [2024-05-15 16:56:24.303901] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:36.821 [2024-05-15 16:56:24.303951] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897798 ] 00:05:36.821 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.821 [2024-05-15 16:56:24.359067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.821 [2024-05-15 16:56:24.430508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.821 16:56:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
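The "@20 val=..." entries above are accel.sh echoing back the settings accel_perf reports for the copy case: the copy opcode, 4096-byte buffers, the software module, and a 1-second run. A hedged sketch of re-running that same workload by hand follows; the binary path and the "-t 1 -w copy -y" flags are copied from the accel_perf command line in this log (-y appears to enable result verification), while the "-c /dev/fd/62" JSON config is supplied by the accel.sh harness, so a standalone run is assumed to simply omit it and use the built-in software module.

    # Hedged sketch only: standalone re-run of the copy case traced above.
    # Path and flags taken from the log; "-c /dev/fd/62" omitted because
    # that config is generated by the test harness, not by hand.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w copy -y
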
00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:38.193 16:56:25 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.193 00:05:38.193 real 0m1.353s 00:05:38.193 user 0m1.243s 00:05:38.193 sys 0m0.115s 00:05:38.193 16:56:25 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.193 16:56:25 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 ************************************ 00:05:38.193 END TEST accel_copy 00:05:38.193 ************************************ 00:05:38.193 16:56:25 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:38.193 16:56:25 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:38.193 16:56:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.193 16:56:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.193 ************************************ 00:05:38.193 START TEST accel_fill 00:05:38.193 ************************************ 00:05:38.193 16:56:25 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:38.193 16:56:25 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:38.193 16:56:25 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:38.193 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.193 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.193 16:56:25 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:38.193 16:56:25 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:38.193 16:56:25 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:38.193 16:56:25 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.193 16:56:25 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.193 16:56:25 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.193 16:56:25 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.193 16:56:25 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.193 16:56:25 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:38.193 16:56:25 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:38.193 [2024-05-15 16:56:25.712315] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:38.193 [2024-05-15 16:56:25.712386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898052 ] 00:05:38.193 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.193 [2024-05-15 16:56:25.768233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.193 [2024-05-15 16:56:25.839684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.449 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.450 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.450 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.450 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.450 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.450 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.450 16:56:25 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.450 16:56:25 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.450 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.450 16:56:25 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:39.569 16:56:27 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.569 00:05:39.569 real 0m1.352s 00:05:39.569 user 0m1.243s 00:05:39.569 sys 0m0.114s 00:05:39.569 16:56:27 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.569 16:56:27 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:39.569 ************************************ 00:05:39.569 END TEST accel_fill 00:05:39.569 ************************************ 00:05:39.569 16:56:27 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:39.569 16:56:27 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:39.569 16:56:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.569 16:56:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.569 ************************************ 00:05:39.569 START TEST accel_copy_crc32c 00:05:39.569 ************************************ 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:39.569 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
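Most of the trace above is a single parse loop in accel.sh: the repeated "@19 IFS=: / read -r var val" and "@21 case "$var"" entries show accel_perf's per-run summary being read line by line, split on ':', and the reported module and opcode stored for the checks made later at "@27". The sketch below is a rough, non-verbatim reconstruction under that assumption; the key names matched in the case statement are guesses, but the trace does show the results landing in accel_module and accel_opc.

    # Rough reconstruction of the parse loop behind the "@19"/"@21" trace
    # lines (not the verbatim accel.sh source; case patterns are guesses).
    while IFS=: read -r var val; do
        case "$var" in
            *[Mm]odule*)   accel_module=${val//[[:space:]]/} ;;  # e.g. "software"
            *[Ww]orkload*) accel_opc=${val//[[:space:]]/} ;;     # e.g. "copy_crc32c"
        esac
    done < <(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y)
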
00:05:39.569 [2024-05-15 16:56:27.125788] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:39.570 [2024-05-15 16:56:27.125834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898300 ] 00:05:39.570 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.570 [2024-05-15 16:56:27.179937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.827 [2024-05-15 16:56:27.252508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.827 16:56:27 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.827 16:56:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.203 00:05:41.203 real 0m1.350s 00:05:41.203 user 0m1.248s 00:05:41.203 sys 0m0.107s 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.203 16:56:28 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:41.203 ************************************ 00:05:41.203 END TEST accel_copy_crc32c 00:05:41.203 ************************************ 00:05:41.203 16:56:28 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:41.203 16:56:28 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:41.203 16:56:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.203 16:56:28 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.203 ************************************ 00:05:41.203 START TEST accel_copy_crc32c_C2 00:05:41.203 ************************************ 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:41.203 [2024-05-15 16:56:28.542358] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:41.203 [2024-05-15 16:56:28.542410] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898547 ] 00:05:41.203 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.203 [2024-05-15 16:56:28.598600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.203 [2024-05-15 16:56:28.670290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.203 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:41.204 16:56:28 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.204 16:56:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.647 00:05:42.647 real 0m1.352s 00:05:42.647 user 0m1.250s 00:05:42.647 sys 0m0.108s 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.647 16:56:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:05:42.647 ************************************ 00:05:42.647 END TEST accel_copy_crc32c_C2 00:05:42.647 ************************************ 00:05:42.647 16:56:29 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:42.647 16:56:29 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:42.647 16:56:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.647 16:56:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.647 ************************************ 00:05:42.647 START TEST accel_dualcast 00:05:42.647 ************************************ 00:05:42.647 16:56:29 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:05:42.647 16:56:29 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:42.647 16:56:29 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:42.647 16:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.647 16:56:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.647 16:56:29 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:42.647 16:56:29 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:42.647 16:56:29 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:42.647 16:56:29 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.647 16:56:29 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.647 16:56:29 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.647 16:56:29 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.647 16:56:29 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.647 16:56:29 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:42.647 16:56:29 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:42.647 [2024-05-15 16:56:29.960046] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
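Each case above ends with the same three checks, traced as the "@27" lines (for example "[[ -n software ]]", "[[ -n copy_crc32c ]]", "[[ software == \s\o\f\t\w\a\r\e ]]"). In plain form they amount to the sketch below: a module and an opcode must have been parsed out of the accel_perf run, and the module must match the expected one, which is "software" throughout this log. The name $expected_module is a placeholder, not necessarily the variable the real script uses.

    # Pass/fail condition behind the "@27" trace lines closing each case.
    [[ -n $accel_module ]]
    [[ -n $accel_opc ]]
    [[ $accel_module == "$expected_module" ]]
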
00:05:42.647 [2024-05-15 16:56:29.960106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898800 ] 00:05:42.647 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.647 [2024-05-15 16:56:30.018222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.647 [2024-05-15 16:56:30.095271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.647 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.647 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.647 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.647 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.647 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.647 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.647 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.647 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.647 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.648 
16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.648 16:56:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.022 16:56:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.022 16:56:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.022 16:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.022 16:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:44.023 16:56:31 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.023 00:05:44.023 real 0m1.361s 00:05:44.023 user 0m1.246s 00:05:44.023 sys 0m0.118s 00:05:44.023 16:56:31 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.023 16:56:31 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:44.023 ************************************ 00:05:44.023 END TEST accel_dualcast 00:05:44.023 ************************************ 00:05:44.023 16:56:31 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:44.023 16:56:31 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:44.023 16:56:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.023 16:56:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.023 ************************************ 00:05:44.023 START TEST accel_compare 00:05:44.023 ************************************ 00:05:44.023 16:56:31 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:44.023 [2024-05-15 16:56:31.379943] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:05:44.023 [2024-05-15 16:56:31.379985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899054 ] 00:05:44.023 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.023 [2024-05-15 16:56:31.433601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.023 [2024-05-15 16:56:31.505882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.023 16:56:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:45.398 16:56:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.398 00:05:45.398 real 0m1.344s 00:05:45.398 user 0m1.239s 00:05:45.398 sys 0m0.110s 00:05:45.398 16:56:32 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.398 16:56:32 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:45.398 ************************************ 00:05:45.398 END TEST accel_compare 00:05:45.398 ************************************ 00:05:45.398 16:56:32 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:45.398 16:56:32 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:45.398 16:56:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.398 16:56:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.398 ************************************ 00:05:45.398 START TEST accel_xor 00:05:45.398 ************************************ 00:05:45.398 16:56:32 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:45.398 [2024-05-15 16:56:32.793478] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:05:45.398 [2024-05-15 16:56:32.793543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899306 ] 00:05:45.398 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.398 [2024-05-15 16:56:32.848899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.398 [2024-05-15 16:56:32.920219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.398 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.399 16:56:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.775 
16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.775 00:05:46.775 real 0m1.353s 00:05:46.775 user 0m1.248s 00:05:46.775 sys 0m0.110s 00:05:46.775 16:56:34 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.775 16:56:34 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:46.775 ************************************ 00:05:46.775 END TEST accel_xor 00:05:46.775 ************************************ 00:05:46.775 16:56:34 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:46.775 16:56:34 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:46.775 16:56:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.775 16:56:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.775 ************************************ 00:05:46.775 START TEST accel_xor 00:05:46.775 ************************************ 00:05:46.775 16:56:34 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:46.775 [2024-05-15 16:56:34.209598] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
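The accel_xor case runs twice: the second pass adds -x 3, which shows up in the trace as val=3 where the first pass recorded val=2, apparently the number of xor source buffers. Reusing ACCEL_PERF and accel_cfg from the sketch above, the two traced invocations differ only in that flag:

# First xor pass: default source count (val=2 in the trace).
"$ACCEL_PERF" -c /dev/fd/62 -t 1 -w xor -y 62< <(echo "$accel_cfg")
# Second xor pass: three source buffers (val=3 in the trace).
"$ACCEL_PERF" -c /dev/fd/62 -t 1 -w xor -y -x 3 62< <(echo "$accel_cfg")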
00:05:46.775 [2024-05-15 16:56:34.209664] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899553 ] 00:05:46.775 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.775 [2024-05-15 16:56:34.264862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.775 [2024-05-15 16:56:34.337024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.775 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.776 16:56:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.148 16:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.148 16:56:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.148 16:56:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.148 16:56:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.148 16:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.148 16:56:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.148 16:56:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.148 16:56:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.148 16:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.148 16:56:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.149 
16:56:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:48.149 16:56:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.149 00:05:48.149 real 0m1.355s 00:05:48.149 user 0m1.246s 00:05:48.149 sys 0m0.113s 00:05:48.149 16:56:35 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.149 16:56:35 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:48.149 ************************************ 00:05:48.149 END TEST accel_xor 00:05:48.149 ************************************ 00:05:48.149 16:56:35 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:48.149 16:56:35 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:48.149 16:56:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.149 16:56:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.149 ************************************ 00:05:48.149 START TEST accel_dif_verify 00:05:48.149 ************************************ 00:05:48.149 16:56:35 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:48.149 [2024-05-15 16:56:35.629134] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
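The DIF cases that follow drop the -y flag (the trace records val=No where the copy-style workloads recorded val=Yes) and add buffer-layout values of 4096, 512 and 8 bytes, which look like the usual 512-byte block carrying an 8-byte DIF. Again reusing the variables from the first sketch, the traced invocations reduce to:

# dif_verify and dif_generate workloads as traced below; no -y flag is passed.
"$ACCEL_PERF" -c /dev/fd/62 -t 1 -w dif_verify 62< <(echo "$accel_cfg")
"$ACCEL_PERF" -c /dev/fd/62 -t 1 -w dif_generate 62< <(echo "$accel_cfg")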
00:05:48.149 [2024-05-15 16:56:35.629188] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899801 ] 00:05:48.149 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.149 [2024-05-15 16:56:35.683711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.149 [2024-05-15 16:56:35.755771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 
16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:48.149 16:56:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.521 
16:56:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:49.521 16:56:36 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.521 00:05:49.521 real 0m1.351s 00:05:49.521 user 0m1.244s 00:05:49.521 sys 0m0.112s 00:05:49.521 16:56:36 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.521 16:56:36 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:49.521 ************************************ 00:05:49.521 END TEST accel_dif_verify 00:05:49.521 ************************************ 00:05:49.521 16:56:36 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:49.521 16:56:36 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:49.521 16:56:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.521 16:56:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.521 ************************************ 00:05:49.521 START TEST accel_dif_generate 00:05:49.521 ************************************ 00:05:49.521 16:56:37 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:05:49.521 16:56:37 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:49.521 16:56:37 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:49.521 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.521 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.521 
16:56:37 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:49.521 16:56:37 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:49.521 16:56:37 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:49.521 16:56:37 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.521 16:56:37 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.521 16:56:37 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.521 16:56:37 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.521 16:56:37 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.521 16:56:37 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:49.521 16:56:37 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:49.521 [2024-05-15 16:56:37.044995] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:49.521 [2024-05-15 16:56:37.045042] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900056 ] 00:05:49.521 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.521 [2024-05-15 16:56:37.098769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.521 [2024-05-15 16:56:37.169708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.780 16:56:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.712 16:56:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.712 16:56:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.712 16:56:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.712 16:56:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.712 16:56:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.712 16:56:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.712 16:56:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.712 16:56:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:50.969 16:56:38 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.969 00:05:50.969 real 0m1.353s 00:05:50.969 user 0m1.240s 00:05:50.969 sys 
0m0.118s 00:05:50.969 16:56:38 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.969 16:56:38 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:50.969 ************************************ 00:05:50.969 END TEST accel_dif_generate 00:05:50.969 ************************************ 00:05:50.969 16:56:38 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:50.969 16:56:38 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:50.969 16:56:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.969 16:56:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.970 ************************************ 00:05:50.970 START TEST accel_dif_generate_copy 00:05:50.970 ************************************ 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:50.970 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:50.970 [2024-05-15 16:56:38.465172] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
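For readers skimming these traces: each accel functional test in this log drives the same accel_perf example binary, varying only the workload flag. A minimal sketch of the dif_generate_copy case follows, using the paths recorded in this run; the harness additionally feeds a generated JSON accel config over fd 62 via -c /dev/fd/62, which can usually be omitted for a quick manual run against the default software module.
  # Sketch only, reconstructed from the command recorded in this log.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -t: run time in seconds, -w: workload to exercise (here DIF generate + copy).
  "$spdk/build/examples/accel_perf" -t 1 -w dif_generate_copy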
00:05:50.970 [2024-05-15 16:56:38.465235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900302 ] 00:05:50.970 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.970 [2024-05-15 16:56:38.521929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.970 [2024-05-15 16:56:38.593775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.227 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.227 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.227 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.227 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.227 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.227 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.227 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.227 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.227 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:51.227 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.228 16:56:38 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.228 16:56:38 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
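The dense repetition of IFS=:, read -r var val and case "$var" traces above and below comes from accel.sh parsing accel_perf's key/value output line by line and remembering which module and opcode actually ran. A hedged reconstruction of that loop is sketched here; the key names and the run_accel_perf wrapper are placeholders, not taken from the script.
  # Hedged sketch, not verbatim accel.sh: consume "key: value" lines and record
  # the engine module and opcode so the test can assert on them afterwards.
  while IFS=: read -r var val; do
      case "$var" in
          *Module*) accel_module=${val//[[:space:]]/} ;;    # e.g. software
          *Workload*) accel_opc=${val//[[:space:]]/} ;;     # e.g. dif_generate_copy
      esac
  done < <(run_accel_perf)   # run_accel_perf: hypothetical wrapper around the command above
  [[ -n $accel_module ]] && [[ -n $accel_opc ]]   # mirrors the accel.sh@27 checks in the log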
00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.161 00:05:52.161 real 0m1.355s 00:05:52.161 user 0m1.240s 00:05:52.161 sys 0m0.121s 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.161 16:56:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:52.161 ************************************ 00:05:52.161 END TEST accel_dif_generate_copy 00:05:52.161 ************************************ 00:05:52.419 16:56:39 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:52.419 16:56:39 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:52.419 16:56:39 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:52.419 16:56:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.419 16:56:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.419 ************************************ 00:05:52.419 START TEST accel_comp 00:05:52.419 ************************************ 00:05:52.419 16:56:39 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:52.419 16:56:39 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:52.419 16:56:39 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:05:52.419 16:56:39 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.419 16:56:39 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.419 16:56:39 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:52.419 16:56:39 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:52.419 16:56:39 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:52.419 16:56:39 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.419 16:56:39 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.419 16:56:39 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.419 16:56:39 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.419 16:56:39 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.419 16:56:39 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:52.419 16:56:39 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:52.419 [2024-05-15 16:56:39.892551] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:52.419 [2024-05-15 16:56:39.892617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900558 ] 00:05:52.419 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.419 [2024-05-15 16:56:39.948046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.419 [2024-05-15 16:56:40.023050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.419 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.419 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.419 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.419 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.419 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.419 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.419 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.419 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.419 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.419 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.419 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.420 
16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.420 16:56:40 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.420 16:56:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:53.793 16:56:41 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.793 00:05:53.793 real 0m1.361s 00:05:53.793 user 0m1.251s 00:05:53.793 sys 0m0.115s 00:05:53.793 16:56:41 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.793 16:56:41 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:53.793 ************************************ 00:05:53.793 END TEST accel_comp 00:05:53.793 ************************************ 00:05:53.793 16:56:41 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:53.793 16:56:41 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:53.793 16:56:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.793 16:56:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.793 ************************************ 00:05:53.793 START TEST accel_decomp 00:05:53.793 ************************************ 00:05:53.793 16:56:41 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:53.793 16:56:41 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:53.793 16:56:41 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:53.793 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.793 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.793 16:56:41 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:53.793 16:56:41 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:53.793 16:56:41 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:53.793 16:56:41 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.793 16:56:41 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.793 16:56:41 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.793 16:56:41 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.793 16:56:41 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.793 16:56:41 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:53.793 16:56:41 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:53.793 [2024-05-15 16:56:41.308480] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:53.793 [2024-05-15 16:56:41.308532] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900809 ] 00:05:53.793 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.793 [2024-05-15 16:56:41.363666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.793 [2024-05-15 16:56:41.435841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.051 16:56:41 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.051 16:56:41 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:54.983 16:56:42 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.983 00:05:54.983 real 0m1.349s 00:05:54.983 user 0m1.239s 00:05:54.983 sys 0m0.115s 00:05:54.983 16:56:42 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.983 16:56:42 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:54.983 ************************************ 00:05:54.983 END TEST accel_decomp 00:05:54.983 ************************************ 00:05:55.241 
16:56:42 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:55.241 16:56:42 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:55.241 16:56:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.241 16:56:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.241 ************************************ 00:05:55.241 START TEST accel_decmop_full 00:05:55.241 ************************************ 00:05:55.241 16:56:42 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:55.241 16:56:42 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:05:55.241 16:56:42 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:05:55.241 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.241 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.241 16:56:42 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:55.241 16:56:42 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:55.241 16:56:42 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:05:55.241 16:56:42 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.241 16:56:42 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.241 16:56:42 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.241 16:56:42 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.241 16:56:42 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.241 16:56:42 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:05:55.241 16:56:42 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:05:55.241 [2024-05-15 16:56:42.726952] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:05:55.241 [2024-05-15 16:56:42.727015] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901063 ] 00:05:55.241 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.241 [2024-05-15 16:56:42.783458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.241 [2024-05-15 16:56:42.857125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
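The decompress variants add a few flags to the same binary, as recorded in the command above: -l points accel_perf at the compressed test input, -y appears to request verification of the decompressed data, and -o 0 lets the transfer size follow the input chunk rather than the 4096-byte default, which is consistent with the '111250 bytes' value traced above. A sketch under those assumptions:
  # Sketch only; flags copied from the invocation recorded in this log
  # (the -c /dev/fd/62 config argument supplied by the harness is dropped here).
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$spdk/build/examples/accel_perf" -t 1 -w decompress \
      -l "$spdk/test/accel/bib" -y -o 0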
00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.498 16:56:42 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:55.499 16:56:42 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.499 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.499 16:56:42 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:56.430 16:56:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:56.431 16:56:44 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.431 00:05:56.431 real 0m1.369s 00:05:56.431 user 0m1.255s 00:05:56.431 sys 0m0.120s 00:05:56.431 16:56:44 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.431 16:56:44 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:05:56.431 ************************************ 00:05:56.431 END TEST accel_decmop_full 00:05:56.431 ************************************ 00:05:56.688 16:56:44 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:56.688 16:56:44 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:56.688 16:56:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.688 16:56:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.688 ************************************ 00:05:56.688 START TEST accel_decomp_mcore 00:05:56.688 ************************************ 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:56.688 [2024-05-15 16:56:44.161294] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:56.688 [2024-05-15 16:56:44.161353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901344 ] 00:05:56.688 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.688 [2024-05-15 16:56:44.218433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.688 [2024-05-15 16:56:44.294356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.688 [2024-05-15 16:56:44.294451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.688 [2024-05-15 16:56:44.294559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.688 [2024-05-15 16:56:44.294561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.688 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.689 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.689 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:56.689 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.689 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:56.689 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.689 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.689 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.689 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.689 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.689 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.689 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.946 16:56:44 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
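The mcore variant running here differs only in the -m 0xf core mask, which makes the SPDK app start one reactor per set bit; that is why the log above reports reactors on cores 0 through 3 instead of core 0 alone. A sketch of that invocation, again assuming the paths from this run and dropping the harness-supplied -c /dev/fd/62:
  # Sketch only: multi-core decompress run, one reactor per bit in the mask.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$spdk/build/examples/accel_perf" -t 1 -w decompress \
      -l "$spdk/test/accel/bib" -y -m 0xf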
00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.880 00:05:57.880 real 0m1.375s 00:05:57.880 user 0m4.597s 00:05:57.880 sys 0m0.122s 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.880 16:56:45 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:57.880 ************************************ 00:05:57.880 END TEST accel_decomp_mcore 00:05:57.880 ************************************ 00:05:58.139 16:56:45 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.139 16:56:45 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:58.139 16:56:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.139 16:56:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.139 ************************************ 00:05:58.139 START TEST accel_decomp_full_mcore 00:05:58.139 ************************************ 00:05:58.139 16:56:45 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.139 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:58.139 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:58.139 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.139 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:58.140 [2024-05-15 16:56:45.611457] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:58.140 [2024-05-15 16:56:45.611524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901635 ] 00:05:58.140 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.140 [2024-05-15 16:56:45.667614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.140 [2024-05-15 16:56:45.742829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.140 [2024-05-15 16:56:45.742943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.140 [2024-05-15 16:56:45.743036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.140 [2024-05-15 16:56:45.743038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.140 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.398 16:56:45 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.398 16:56:45 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.332 16:56:46 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.332 00:05:59.332 real 0m1.387s 00:05:59.332 user 0m4.633s 00:05:59.332 sys 0m0.133s 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.332 16:56:46 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:59.332 ************************************ 00:05:59.332 END TEST accel_decomp_full_mcore 00:05:59.332 ************************************ 00:05:59.591 16:56:47 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:59.591 16:56:47 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:59.591 16:56:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.591 16:56:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.591 ************************************ 00:05:59.591 START TEST accel_decomp_mthread 00:05:59.591 ************************************ 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
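For reference, every decompress case above drives the same accel_perf example binary, varying only the core mask and thread count. A minimal standalone sketch of the multicore invocation, with the flags copied from the trace; the inline JSON is an assumption standing in for the harness-generated build_accel_config output that is normally fed on /dev/fd/62:

  # accel_decomp_mcore / accel_decomp_full_mcore style run: -m 0xf matches the
  # four reactors started in the log; the *_mthread variants use -T 2 instead.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf \
      62<<< '{"subsystems": []}'   # assumed minimal config; the harness builds the real one
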
00:05:59.591 [2024-05-15 16:56:47.069177] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:05:59.591 [2024-05-15 16:56:47.069225] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901932 ] 00:05:59.591 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.591 [2024-05-15 16:56:47.123161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.591 [2024-05-15 16:56:47.196025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.591 16:56:47 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.591 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.849 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.849 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.849 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.849 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.849 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:59.849 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.850 16:56:47 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.782 00:06:00.782 real 0m1.364s 00:06:00.782 user 0m1.261s 00:06:00.782 sys 0m0.117s 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.782 16:56:48 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:00.782 ************************************ 00:06:00.782 END TEST accel_decomp_mthread 00:06:00.782 ************************************ 00:06:01.040 16:56:48 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:01.040 16:56:48 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:01.040 16:56:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.040 16:56:48 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.040 ************************************ 00:06:01.040 START TEST accel_decomp_full_mthread 00:06:01.040 ************************************ 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:01.040 [2024-05-15 16:56:48.506858] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:06:01.040 [2024-05-15 16:56:48.506912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902216 ] 00:06:01.040 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.040 [2024-05-15 16:56:48.562699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.040 [2024-05-15 16:56:48.635253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:01.040 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.041 16:56:48 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.413 00:06:02.413 real 0m1.387s 00:06:02.413 user 0m1.284s 00:06:02.413 sys 0m0.117s 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.413 16:56:49 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:02.413 ************************************ 00:06:02.413 END TEST accel_decomp_full_mthread 00:06:02.413 
************************************ 00:06:02.413 16:56:49 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:02.413 16:56:49 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:02.413 16:56:49 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:02.413 16:56:49 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:02.413 16:56:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.413 16:56:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.413 16:56:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.413 16:56:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.413 16:56:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.413 16:56:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.413 16:56:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.413 16:56:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:02.413 16:56:49 accel -- accel/accel.sh@41 -- # jq -r . 00:06:02.414 ************************************ 00:06:02.414 START TEST accel_dif_functional_tests 00:06:02.414 ************************************ 00:06:02.414 16:56:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:02.414 [2024-05-15 16:56:49.983659] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:06:02.414 [2024-05-15 16:56:49.983695] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902518 ] 00:06:02.414 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.414 [2024-05-15 16:56:50.037202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.672 [2024-05-15 16:56:50.116071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.672 [2024-05-15 16:56:50.116155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.672 [2024-05-15 16:56:50.116157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.672 00:06:02.672 00:06:02.672 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.672 http://cunit.sourceforge.net/ 00:06:02.672 00:06:02.672 00:06:02.672 Suite: accel_dif 00:06:02.672 Test: verify: DIF generated, GUARD check ...passed 00:06:02.672 Test: verify: DIF generated, APPTAG check ...passed 00:06:02.672 Test: verify: DIF generated, REFTAG check ...passed 00:06:02.672 Test: verify: DIF not generated, GUARD check ...[2024-05-15 16:56:50.183819] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:02.672 [2024-05-15 16:56:50.183860] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:02.672 passed 00:06:02.672 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 16:56:50.183888] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:02.672 [2024-05-15 16:56:50.183902] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:02.672 passed 00:06:02.672 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 16:56:50.183918] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:02.672 [2024-05-15 
16:56:50.183933] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:02.672 passed 00:06:02.672 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:02.672 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 16:56:50.183975] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:02.672 passed 00:06:02.672 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:02.672 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:02.672 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:02.672 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 16:56:50.184080] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:02.672 passed 00:06:02.672 Test: generate copy: DIF generated, GUARD check ...passed 00:06:02.672 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:02.672 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:02.672 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:02.672 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:02.672 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:02.672 Test: generate copy: iovecs-len validate ...[2024-05-15 16:56:50.184251] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:02.672 passed 00:06:02.672 Test: generate copy: buffer alignment validate ...passed 00:06:02.672 00:06:02.672 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.672 suites 1 1 n/a 0 0 00:06:02.672 tests 20 20 20 0 0 00:06:02.672 asserts 204 204 204 0 n/a 00:06:02.672 00:06:02.672 Elapsed time = 0.002 seconds 00:06:02.931 00:06:02.931 real 0m0.438s 00:06:02.931 user 0m0.656s 00:06:02.931 sys 0m0.139s 00:06:02.931 16:56:50 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.931 16:56:50 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:02.931 ************************************ 00:06:02.931 END TEST accel_dif_functional_tests 00:06:02.931 ************************************ 00:06:02.931 00:06:02.931 real 0m31.522s 00:06:02.931 user 0m35.137s 00:06:02.931 sys 0m4.231s 00:06:02.931 16:56:50 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.931 16:56:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.931 ************************************ 00:06:02.931 END TEST accel 00:06:02.931 ************************************ 00:06:02.931 16:56:50 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:02.931 16:56:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:02.931 16:56:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.931 16:56:50 -- common/autotest_common.sh@10 -- # set +x 00:06:02.931 ************************************ 00:06:02.931 START TEST accel_rpc 00:06:02.931 ************************************ 00:06:02.931 16:56:50 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:02.931 * Looking for test storage... 
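The accel_dif_functional_tests run above is a standalone CUnit binary rather than an accel_perf workload; the "Failed to compare Guard/App Tag/Ref Tag" errors it prints are the deliberate negative cases the suite verifies. A hedged sketch of invoking it directly, again assuming a minimal config in place of the one the harness supplies on /dev/fd/62:

  # Run the DIF functional tests on their own; -c takes the same JSON accel
  # config as accel_perf.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/test/accel/dif/dif" -c /dev/fd/62 62<<< '{"subsystems": []}'   # assumed minimal config
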
00:06:02.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:02.931 16:56:50 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:02.931 16:56:50 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2902597 00:06:02.931 16:56:50 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2902597 00:06:02.931 16:56:50 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:02.931 16:56:50 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 2902597 ']' 00:06:02.931 16:56:50 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.931 16:56:50 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.931 16:56:50 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.931 16:56:50 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.931 16:56:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.228 [2024-05-15 16:56:50.609022] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:06:03.228 [2024-05-15 16:56:50.609070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902597 ] 00:06:03.228 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.228 [2024-05-15 16:56:50.662495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.228 [2024-05-15 16:56:50.737014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.795 16:56:51 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.795 16:56:51 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:03.795 16:56:51 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:03.795 16:56:51 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:03.795 16:56:51 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:03.795 16:56:51 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:03.795 16:56:51 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:03.795 16:56:51 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.795 16:56:51 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.795 16:56:51 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.795 ************************************ 00:06:03.795 START TEST accel_assign_opcode 00:06:03.795 ************************************ 00:06:03.795 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:06:03.795 16:56:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:03.795 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.795 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:03.795 [2024-05-15 16:56:51.439130] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:03.795 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:06:03.795 16:56:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:03.795 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.795 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:03.795 [2024-05-15 16:56:51.451153] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:04.053 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.053 16:56:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:04.053 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.053 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:04.053 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.053 16:56:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:04.053 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.053 16:56:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:04.053 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:04.053 16:56:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:04.053 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.053 software 00:06:04.053 00:06:04.053 real 0m0.229s 00:06:04.053 user 0m0.042s 00:06:04.053 sys 0m0.012s 00:06:04.053 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.053 16:56:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:04.053 ************************************ 00:06:04.053 END TEST accel_assign_opcode 00:06:04.053 ************************************ 00:06:04.054 16:56:51 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2902597 00:06:04.054 16:56:51 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 2902597 ']' 00:06:04.054 16:56:51 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 2902597 00:06:04.054 16:56:51 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:06:04.054 16:56:51 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:04.054 16:56:51 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2902597 00:06:04.312 16:56:51 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:04.312 16:56:51 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:04.312 16:56:51 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2902597' 00:06:04.312 killing process with pid 2902597 00:06:04.312 16:56:51 accel_rpc -- common/autotest_common.sh@965 -- # kill 2902597 00:06:04.312 16:56:51 accel_rpc -- common/autotest_common.sh@970 -- # wait 2902597 00:06:04.570 00:06:04.570 real 0m1.602s 00:06:04.570 user 0m1.689s 00:06:04.570 sys 0m0.410s 00:06:04.570 16:56:52 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.570 16:56:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.570 ************************************ 00:06:04.570 END TEST accel_rpc 00:06:04.570 ************************************ 00:06:04.570 16:56:52 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:04.570 16:56:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:04.570 16:56:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.570 16:56:52 -- common/autotest_common.sh@10 -- # set +x 00:06:04.570 ************************************ 00:06:04.570 START TEST app_cmdline 00:06:04.570 ************************************ 00:06:04.570 16:56:52 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:04.828 * Looking for test storage... 00:06:04.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:04.828 16:56:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:04.828 16:56:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2902907 00:06:04.828 16:56:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2902907 00:06:04.828 16:56:52 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:04.828 16:56:52 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 2902907 ']' 00:06:04.828 16:56:52 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.828 16:56:52 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.828 16:56:52 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.828 16:56:52 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.828 16:56:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.828 [2024-05-15 16:56:52.277044] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:06:04.828 [2024-05-15 16:56:52.277089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902907 ] 00:06:04.828 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.828 [2024-05-15 16:56:52.330992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.828 [2024-05-15 16:56:52.413148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:05.763 16:56:53 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:05.763 { 00:06:05.763 "version": "SPDK v24.05-pre git sha1 0ba8ca574", 00:06:05.763 "fields": { 00:06:05.763 "major": 24, 00:06:05.763 "minor": 5, 00:06:05.763 "patch": 0, 00:06:05.763 "suffix": "-pre", 00:06:05.763 "commit": "0ba8ca574" 00:06:05.763 } 00:06:05.763 } 00:06:05.763 16:56:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:05.763 16:56:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:05.763 16:56:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:05.763 16:56:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:05.763 16:56:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:05.763 16:56:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.763 16:56:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.763 16:56:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:05.763 16:56:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:05.763 16:56:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.763 16:56:53 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:05.763 16:56:53 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.021 request: 00:06:06.021 { 00:06:06.021 "method": "env_dpdk_get_mem_stats", 00:06:06.021 "req_id": 1 00:06:06.021 } 00:06:06.021 Got JSON-RPC error response 00:06:06.021 response: 00:06:06.021 { 00:06:06.021 "code": -32601, 00:06:06.021 "message": "Method not found" 00:06:06.021 } 00:06:06.021 16:56:53 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:06.021 16:56:53 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.021 16:56:53 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:06.021 16:56:53 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.021 16:56:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2902907 00:06:06.021 16:56:53 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 2902907 ']' 00:06:06.021 16:56:53 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 2902907 00:06:06.021 16:56:53 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:06.021 16:56:53 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:06.021 16:56:53 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2902907 00:06:06.021 16:56:53 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:06.021 16:56:53 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:06.021 16:56:53 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2902907' 00:06:06.021 killing process with pid 2902907 00:06:06.021 16:56:53 app_cmdline -- common/autotest_common.sh@965 -- # kill 2902907 00:06:06.021 16:56:53 app_cmdline -- common/autotest_common.sh@970 -- # wait 2902907 00:06:06.279 00:06:06.279 real 0m1.682s 00:06:06.279 user 0m1.994s 00:06:06.279 sys 0m0.426s 00:06:06.279 16:56:53 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.279 16:56:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.279 ************************************ 00:06:06.279 END TEST app_cmdline 00:06:06.279 ************************************ 00:06:06.279 16:56:53 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:06.279 16:56:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.279 16:56:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.279 16:56:53 -- common/autotest_common.sh@10 -- # set +x 00:06:06.279 ************************************ 00:06:06.279 START TEST version 00:06:06.279 ************************************ 00:06:06.279 16:56:53 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:06.537 * Looking for test storage... 
00:06:06.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:06.537 16:56:53 version -- app/version.sh@17 -- # get_header_version major 00:06:06.537 16:56:53 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.537 16:56:53 version -- app/version.sh@14 -- # cut -f2 00:06:06.537 16:56:53 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.537 16:56:54 version -- app/version.sh@17 -- # major=24 00:06:06.537 16:56:54 version -- app/version.sh@18 -- # get_header_version minor 00:06:06.537 16:56:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.537 16:56:54 version -- app/version.sh@14 -- # cut -f2 00:06:06.537 16:56:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.537 16:56:54 version -- app/version.sh@18 -- # minor=5 00:06:06.537 16:56:54 version -- app/version.sh@19 -- # get_header_version patch 00:06:06.537 16:56:54 version -- app/version.sh@14 -- # cut -f2 00:06:06.537 16:56:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.537 16:56:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.537 16:56:54 version -- app/version.sh@19 -- # patch=0 00:06:06.537 16:56:54 version -- app/version.sh@20 -- # get_header_version suffix 00:06:06.537 16:56:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:06.537 16:56:54 version -- app/version.sh@14 -- # cut -f2 00:06:06.537 16:56:54 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.537 16:56:54 version -- app/version.sh@20 -- # suffix=-pre 00:06:06.537 16:56:54 version -- app/version.sh@22 -- # version=24.5 00:06:06.537 16:56:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:06.537 16:56:54 version -- app/version.sh@28 -- # version=24.5rc0 00:06:06.537 16:56:54 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:06.537 16:56:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:06.537 16:56:54 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:06.537 16:56:54 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:06.537 00:06:06.537 real 0m0.149s 00:06:06.537 user 0m0.074s 00:06:06.537 sys 0m0.110s 00:06:06.537 16:56:54 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.537 16:56:54 version -- common/autotest_common.sh@10 -- # set +x 00:06:06.537 ************************************ 00:06:06.537 END TEST version 00:06:06.537 ************************************ 00:06:06.537 16:56:54 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:06.537 16:56:54 -- spdk/autotest.sh@194 -- # uname -s 00:06:06.538 16:56:54 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:06.538 16:56:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:06.538 16:56:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:06.538 16:56:54 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
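The version test above rebuilds the release string from include/spdk/version.h and compares it against the Python package's __version__. A condensed sketch of that parsing, assuming the same one-#define-per-field layout of version.h; get_field is only a local shorthand for the grep/cut/tr pipeline seen in the trace, and the suffix handling is simplified to what this -pre build exercises:

    hdr=$SPDK_DIR/include/spdk/version.h      # assumes $SPDK_DIR points at the checkout

    get_field() {
        grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }

    major=$(get_field MAJOR)                  # 24 in this run
    minor=$(get_field MINOR)                  # 5
    patch=$(get_field PATCH)                  # 0
    suffix=$(get_field SUFFIX)                # -pre

    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [[ -n $suffix ]] && version=${version}rc0 # -pre builds report as X.Yrc0

    echo "$version"                           # 24.5rc0, matching python3 -c 'import spdk; print(spdk.__version__)'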
00:06:06.538 16:56:54 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:06.538 16:56:54 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:06.538 16:56:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.538 16:56:54 -- common/autotest_common.sh@10 -- # set +x 00:06:06.538 16:56:54 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:06.538 16:56:54 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:06.538 16:56:54 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:06:06.538 16:56:54 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:06:06.538 16:56:54 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:06:06.538 16:56:54 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:06:06.538 16:56:54 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:06.538 16:56:54 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:06.538 16:56:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.538 16:56:54 -- common/autotest_common.sh@10 -- # set +x 00:06:06.538 ************************************ 00:06:06.538 START TEST nvmf_tcp 00:06:06.538 ************************************ 00:06:06.538 16:56:54 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:06.797 * Looking for test storage... 00:06:06.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:06.797 16:56:54 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.797 16:56:54 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.797 16:56:54 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.797 16:56:54 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.797 16:56:54 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.797 16:56:54 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.797 16:56:54 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:06.797 16:56:54 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:06.797 16:56:54 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:06.797 16:56:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:06.797 16:56:54 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:06.797 16:56:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:06.797 16:56:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.797 
16:56:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.797 ************************************ 00:06:06.797 START TEST nvmf_example 00:06:06.797 ************************************ 00:06:06.797 16:56:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:06.797 * Looking for test storage... 00:06:06.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:06.797 16:56:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:06.797 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:06.797 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.797 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.797 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.797 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.797 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.797 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.797 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.797 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.797 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.797 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.797 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:06.798 16:56:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:12.064 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:12.064 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:12.065 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:12.065 Found net devices under 
0000:86:00.0: cvl_0_0 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:12.065 Found net devices under 0000:86:00.1: cvl_0_1 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:12.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:12.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.378 ms 00:06:12.065 00:06:12.065 --- 10.0.0.2 ping statistics --- 00:06:12.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.065 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:06:12.065 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:12.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:12.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:06:12.323 00:06:12.323 --- 10.0.0.1 ping statistics --- 00:06:12.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:12.323 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:12.323 16:56:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:12.324 16:56:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2906517 00:06:12.324 16:56:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:12.324 16:56:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2906517 00:06:12.324 16:56:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 2906517 ']' 00:06:12.324 16:56:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.324 16:56:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.324 16:56:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:12.324 16:56:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
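nvmf_tcp_init in the trace above turns the two e810 ports into a point-to-point test link: the target-side port is moved into its own network namespace and both ends are addressed on 10.0.0.0/24, so initiator and target can share one host. The sequence, condensed from the trace (the cvl_0_0/cvl_0_1 interface names and the cvl_0_0_ns_spdk namespace are specific to this machine):

    # target side gets its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # initiator keeps cvl_0_1 in the default namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # allow NVMe/TCP traffic to the default port 4420 through the host firewall
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # sanity check both directions before starting any SPDK process
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1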
00:06:12.324 16:56:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.324 16:56:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:12.324 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:13.256 16:57:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:13.256 EAL: No free 2048 kB hugepages reported on node 1 
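With the namespace up, the example test drives a short sequence of RPCs against the nvmf example app and then points spdk_nvme_perf at the resulting listener. The same steps pulled out of the trace; rpc_cmd from the test framework is written out here as scripts/rpc.py against the app's default RPC socket, which is an assumption of this sketch, as is the $SPDK_DIR prefix:

    rpc=$SPDK_DIR/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options as in the trace
    $rpc bdev_malloc_create 64 512                                   # 64 MiB malloc bdev, 512-byte blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # exposed as NSID 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: 10 s of 4 KiB random mixed I/O at queue depth 64 against the new subsystem
    $SPDK_DIR/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'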
00:06:25.452 Initializing NVMe Controllers 00:06:25.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:25.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:25.452 Initialization complete. Launching workers. 00:06:25.452 ======================================================== 00:06:25.452 Latency(us) 00:06:25.452 Device Information : IOPS MiB/s Average min max 00:06:25.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17817.93 69.60 3591.54 506.72 15483.71 00:06:25.452 ======================================================== 00:06:25.452 Total : 17817.93 69.60 3591.54 506.72 15483.71 00:06:25.452 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:25.452 rmmod nvme_tcp 00:06:25.452 rmmod nvme_fabrics 00:06:25.452 rmmod nvme_keyring 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2906517 ']' 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2906517 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 2906517 ']' 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 2906517 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:25.452 16:57:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2906517 00:06:25.452 16:57:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:06:25.452 16:57:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:06:25.452 16:57:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2906517' 00:06:25.452 killing process with pid 2906517 00:06:25.452 16:57:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 2906517 00:06:25.452 16:57:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 2906517 00:06:25.452 nvmf threads initialize successfully 00:06:25.452 bdev subsystem init successfully 00:06:25.452 created a nvmf target service 00:06:25.452 create targets's poll groups done 00:06:25.452 all subsystems of target started 00:06:25.452 nvmf target is running 00:06:25.452 all subsystems of target stopped 00:06:25.452 destroy targets's poll groups done 00:06:25.452 destroyed the nvmf target service 00:06:25.452 bdev subsystem finish successfully 00:06:25.452 nvmf threads destroy successfully 00:06:25.452 16:57:11 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:25.452 16:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:25.452 16:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:25.452 16:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:25.452 16:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:25.452 16:57:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.452 16:57:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:25.452 16:57:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.710 16:57:13 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:25.710 16:57:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:25.710 16:57:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.710 16:57:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:25.710 00:06:25.710 real 0m19.007s 00:06:25.710 user 0m45.729s 00:06:25.710 sys 0m5.343s 00:06:25.710 16:57:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.710 16:57:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:25.710 ************************************ 00:06:25.710 END TEST nvmf_example 00:06:25.710 ************************************ 00:06:25.710 16:57:13 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:25.710 16:57:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:25.710 16:57:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.710 16:57:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:25.970 ************************************ 00:06:25.970 START TEST nvmf_filesystem 00:06:25.970 ************************************ 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:25.970 * Looking for test storage... 
00:06:25.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:25.970 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:25.971 16:57:13 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:25.971 16:57:13 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:25.971 
16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:25.971 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:25.971 #define SPDK_CONFIG_H 00:06:25.971 #define SPDK_CONFIG_APPS 1 00:06:25.971 #define SPDK_CONFIG_ARCH native 00:06:25.971 #undef SPDK_CONFIG_ASAN 00:06:25.971 #undef SPDK_CONFIG_AVAHI 00:06:25.971 #undef SPDK_CONFIG_CET 00:06:25.971 #define SPDK_CONFIG_COVERAGE 1 00:06:25.971 #define SPDK_CONFIG_CROSS_PREFIX 00:06:25.971 #undef SPDK_CONFIG_CRYPTO 00:06:25.971 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:25.971 #undef SPDK_CONFIG_CUSTOMOCF 00:06:25.971 #undef SPDK_CONFIG_DAOS 00:06:25.971 #define SPDK_CONFIG_DAOS_DIR 00:06:25.971 #define SPDK_CONFIG_DEBUG 1 00:06:25.971 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:25.971 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:25.971 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:25.971 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:25.971 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:25.971 #undef SPDK_CONFIG_DPDK_UADK 00:06:25.971 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:25.971 #define SPDK_CONFIG_EXAMPLES 1 00:06:25.971 #undef SPDK_CONFIG_FC 00:06:25.971 #define SPDK_CONFIG_FC_PATH 00:06:25.971 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:25.971 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:25.971 #undef SPDK_CONFIG_FUSE 00:06:25.971 #undef SPDK_CONFIG_FUZZER 00:06:25.971 #define SPDK_CONFIG_FUZZER_LIB 00:06:25.971 #undef SPDK_CONFIG_GOLANG 00:06:25.971 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:25.971 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:25.971 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:25.971 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:25.971 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:25.971 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:25.971 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:25.971 #define SPDK_CONFIG_IDXD 1 00:06:25.971 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:25.971 #undef SPDK_CONFIG_IPSEC_MB 00:06:25.971 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:25.971 #define SPDK_CONFIG_ISAL 1 00:06:25.971 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:25.971 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:25.971 #define SPDK_CONFIG_LIBDIR 00:06:25.971 #undef SPDK_CONFIG_LTO 00:06:25.971 #define SPDK_CONFIG_MAX_LCORES 00:06:25.971 #define SPDK_CONFIG_NVME_CUSE 1 00:06:25.971 #undef SPDK_CONFIG_OCF 00:06:25.972 #define SPDK_CONFIG_OCF_PATH 00:06:25.972 #define SPDK_CONFIG_OPENSSL_PATH 00:06:25.972 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:25.972 #define SPDK_CONFIG_PGO_DIR 00:06:25.972 #undef 
SPDK_CONFIG_PGO_USE 00:06:25.972 #define SPDK_CONFIG_PREFIX /usr/local 00:06:25.972 #undef SPDK_CONFIG_RAID5F 00:06:25.972 #undef SPDK_CONFIG_RBD 00:06:25.972 #define SPDK_CONFIG_RDMA 1 00:06:25.972 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:25.972 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:25.972 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:25.972 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:25.972 #define SPDK_CONFIG_SHARED 1 00:06:25.972 #undef SPDK_CONFIG_SMA 00:06:25.972 #define SPDK_CONFIG_TESTS 1 00:06:25.972 #undef SPDK_CONFIG_TSAN 00:06:25.972 #define SPDK_CONFIG_UBLK 1 00:06:25.972 #define SPDK_CONFIG_UBSAN 1 00:06:25.972 #undef SPDK_CONFIG_UNIT_TESTS 00:06:25.972 #undef SPDK_CONFIG_URING 00:06:25.972 #define SPDK_CONFIG_URING_PATH 00:06:25.972 #undef SPDK_CONFIG_URING_ZNS 00:06:25.972 #undef SPDK_CONFIG_USDT 00:06:25.972 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:25.972 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:25.972 #define SPDK_CONFIG_VFIO_USER 1 00:06:25.972 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:25.972 #define SPDK_CONFIG_VHOST 1 00:06:25.972 #define SPDK_CONFIG_VIRTIO 1 00:06:25.972 #undef SPDK_CONFIG_VTUNE 00:06:25.972 #define SPDK_CONFIG_VTUNE_DIR 00:06:25.972 #define SPDK_CONFIG_WERROR 1 00:06:25.972 #define SPDK_CONFIG_WPDK_DIR 00:06:25.972 #undef SPDK_CONFIG_XNVME 00:06:25.972 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
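The block of escaped glob characters above is just how xtrace prints the pattern applications.sh matches against config.h: it reads the installed header and only treats the build as a debug build (and so considers SPDK_AUTOTEST_DEBUG_APPS) when "#define SPDK_CONFIG_DEBUG" appears in it. A rough equivalent of that check, with the header path treated as an assumption:

# rough equivalent of the debug-build check traced above (header path is an assumption)
config_h="$SPDK_ROOT/include/spdk/config.h"
if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build: SPDK_AUTOTEST_DEBUG_APPS handling applies"
fi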
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
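The PATH echoed by paths/export.sh above carries the golangci, protoc and go directories several times over; each time the pkgdep export script is sourced it prepends the same toolchain directories again, which appears harmless but makes the variable keep growing. If that duplication ever became a problem it could be squeezed out with a small helper like the one below (not part of the SPDK scripts):

# sketch: drop duplicate entries from PATH while preserving order (not part of the SPDK scripts)
dedup_path() {
    local out= dir
    local IFS=:
    for dir in $PATH; do
        case ":$out:" in
            *":$dir:"*) ;;                 # already present, skip
            *) out=${out:+$out:}$dir ;;
        esac
    done
    printf '%s\n' "$out"
}
PATH=$(dedup_path)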
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:25.972 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:06:25.973 16:57:13 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:25.973 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
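The long run of ': 0' / ': 1' lines followed by 'export SPDK_TEST_*' above is autotest_common.sh giving every test knob a default before exporting it; only the features this job enables (functional tests, nvme-cli, NVMF over tcp on e810 NICs, vfio-user, UBSAN) show a non-zero or non-empty value. The trace is consistent with the usual bash default-assignment idiom, sketched here rather than quoted from the SPDK source:

# sketch of the default-and-export idiom the ': 0' / 'export' pairs are consistent with
: "${SPDK_TEST_NVMF:=0}"               # keep a caller-provided value, otherwise default to 0
export SPDK_TEST_NVMF
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"   # string-valued knobs get string defaults
export SPDK_TEST_NVMF_TRANSPORT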
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
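The autotest_common.sh block above configures the sanitizer runtimes: ASAN_OPTIONS and UBSAN_OPTIONS are exported, a LeakSanitizer suppression file is rebuilt with a single 'leak:libfuse3.so' entry, and LSAN_OPTIONS is pointed at it so known libfuse3 leaks do not fail the run. Condensed, the effect is:

# condensed effect of the sanitizer setup traced above
export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
rm -rf /var/tmp/asan_suppression_file
echo 'leak:libfuse3.so' >> /var/tmp/asan_suppression_file
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file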
00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j96 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 2909453 ]] 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 2909453 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.6DuiRn 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.6DuiRn/tests/target /tmp/spdk.6DuiRn 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=67108864 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=972767232 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4311662592 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=188707905536 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=195974311936 00:06:25.974 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=7266406400 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=97983778816 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=97987153920 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=39185489920 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=39194865664 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9375744 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=97986588672 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=97987158016 00:06:25.975 16:57:13 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=569344 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=19597426688 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=19597430784 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:06:25.975 * Looking for test storage... 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=188707905536 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=9480998912 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:06:25.975 16:57:13 
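set_test_storage above walks the df -T output, settles on the overlay root that backs the test directory (188707905536 bytes available of 195974311936), and only falls back to the /tmp/spdk.6DuiRn scratch area if that mount is too small. The size check is plain arithmetic: the request of 2147483648 bytes arrives padded to 2214592512 (an extra 64 MiB), it is added to the 7266406400 bytes already used on the mount, and the result must stay at or below 95% of the filesystem; 9480998912 bytes against 195974311936 is well under 5%, so the in-place test storage is kept:

# the arithmetic behind the storage check traced above
requested_size=2214592512               # 2147483648 requested + 67108864 (64 MiB) padding
used=7266406400                         # bytes already used on the overlay root
fs_size=195974311936                    # total size of that filesystem
new_size=$(( requested_size + used ))   # 9480998912
if (( new_size * 100 / fs_size > 95 )); then
    echo "would overfill the mount, fall back to the scratch directory"
fi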
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.975 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.234 
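nvmf/common.sh above pins the listener ports (4420/4421/4422), generates a host NQN with 'nvme gen-hostnqn', and stores the matching --hostnqn/--hostid arguments in the NVME_HOST array so every later 'nvme connect' identifies this initiator consistently against nqn.2016-06.io.spdk:testnqn. How those pieces are typically combined is sketched below; the target address comes from later in this log and the exact connect invocation is an assumption, not something traced here:

# sketch: combining the variables above into a connect call (invocation is an assumption)
NVME_CONNECT='nvme connect'
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
$NVME_CONNECT -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"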
16:57:13 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:26.234 16:57:13 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:26.234 16:57:13 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:31.494 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:31.494 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:31.495 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:31.495 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.495 16:57:18 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:31.495 Found net devices under 0000:86:00.0: cvl_0_0 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:31.495 Found net devices under 0000:86:00.1: cvl_0_1 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- 
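The discovery pass above fills the e810/x722/mlx PCI device-ID tables, matches both 0000:86:00.x functions against the e810 ID 0x159b with the ice driver already loaded, and then resolves the kernel interface behind each function straight from sysfs, which is where cvl_0_0 and cvl_0_1 come from. The sysfs lookup boils down to:

# the sysfs lookup the trace above performs (BDF taken from this log)
pci=0000:86:00.0
for path in /sys/bus/pci/devices/$pci/net/*; do
    [[ -e $path ]] || continue            # unmatched glob: no netdev bound to this function
    echo "net device under $pci: ${path##*/}"
done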
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:31.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:31.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:06:31.495 00:06:31.495 --- 10.0.0.2 ping statistics --- 00:06:31.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.495 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:31.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:31.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:06:31.495 00:06:31.495 --- 10.0.0.1 ping statistics --- 00:06:31.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.495 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.495 16:57:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:31.496 ************************************ 00:06:31.496 START TEST nvmf_filesystem_no_in_capsule 00:06:31.496 ************************************ 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # 
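The nvmf_tcp_init sequence traced here (nvmf/common.sh@229-268) builds a single-host loopback topology for the TCP transport: one port of the e810 pair (cvl_0_0) is moved into a dedicated network namespace and acts as the target side, while its sibling port (cvl_0_1) stays in the default namespace as the initiator. A minimal sketch of that setup, using the namespace, interface and address names printed in the log (condensed from the trace, not the exact script):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                         # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The two successful pings in the log are exactly this verification step; only after both directions work does the script load nvme-tcp and hand control to the filesystem tests.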
xtrace_disable 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2912383 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2912383 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2912383 ']' 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:31.496 16:57:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:31.496 [2024-05-15 16:57:18.734132] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:06:31.496 [2024-05-15 16:57:18.734176] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.496 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.496 [2024-05-15 16:57:18.789922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.496 [2024-05-15 16:57:18.871762] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:31.496 [2024-05-15 16:57:18.871799] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:31.496 [2024-05-15 16:57:18.871807] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.496 [2024-05-15 16:57:18.871813] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.496 [2024-05-15 16:57:18.871818] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:31.496 [2024-05-15 16:57:18.871861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.496 [2024-05-15 16:57:18.871879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.496 [2024-05-15 16:57:18.871965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.496 [2024-05-15 16:57:18.871966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.059 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.059 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:32.059 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:32.059 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.059 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.059 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:32.059 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:32.059 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:32.059 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.060 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.060 [2024-05-15 16:57:19.590067] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.060 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.060 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:32.060 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.060 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.060 Malloc1 00:06:32.060 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.060 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:32.060 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.060 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.317 [2024-05-15 16:57:19.735131] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:32.317 [2024-05-15 16:57:19.735362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:32.317 { 00:06:32.317 "name": "Malloc1", 00:06:32.317 "aliases": [ 00:06:32.317 "c9aee931-cc03-4ab4-9567-2be7de6bac44" 00:06:32.317 ], 00:06:32.317 "product_name": "Malloc disk", 00:06:32.317 "block_size": 512, 00:06:32.317 "num_blocks": 1048576, 00:06:32.317 "uuid": "c9aee931-cc03-4ab4-9567-2be7de6bac44", 00:06:32.317 "assigned_rate_limits": { 00:06:32.317 "rw_ios_per_sec": 0, 00:06:32.317 "rw_mbytes_per_sec": 0, 00:06:32.317 "r_mbytes_per_sec": 0, 00:06:32.317 "w_mbytes_per_sec": 0 00:06:32.317 }, 00:06:32.317 "claimed": true, 00:06:32.317 "claim_type": "exclusive_write", 00:06:32.317 "zoned": false, 00:06:32.317 "supported_io_types": { 00:06:32.317 "read": true, 00:06:32.317 "write": true, 00:06:32.317 "unmap": true, 00:06:32.317 "write_zeroes": true, 00:06:32.317 "flush": true, 00:06:32.317 "reset": true, 00:06:32.317 "compare": false, 00:06:32.317 "compare_and_write": false, 00:06:32.317 "abort": true, 00:06:32.317 "nvme_admin": false, 00:06:32.317 "nvme_io": false 00:06:32.317 }, 00:06:32.317 "memory_domains": [ 00:06:32.317 { 00:06:32.317 "dma_device_id": "system", 00:06:32.317 "dma_device_type": 1 
00:06:32.317 }, 00:06:32.317 { 00:06:32.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:32.317 "dma_device_type": 2 00:06:32.317 } 00:06:32.317 ], 00:06:32.317 "driver_specific": {} 00:06:32.317 } 00:06:32.317 ]' 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:32.317 16:57:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:33.726 16:57:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:33.726 16:57:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:33.726 16:57:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:33.726 16:57:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:33.726 16:57:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:35.623 16:57:22 
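Before that nvme connect could run, the target was provisioned over the RPC socket; the calls are all visible in the trace (target/filesystem.sh@52-56 and @60). A condensed sketch, where rpc_cmd is the suite's helper for talking to the RPC socket and the hostnqn/hostid values are the ones printed in the log:

    # transport for this phase: in-capsule data disabled (-c 0)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1      # 512 MiB RAM-backed bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # host side: attach to the subsystem and wait for the namespace to appear
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

The bdev_get_bdevs JSON dumped above is only used to compute the expected device size (512 B * 1048576 blocks = 536870912 bytes), which is then compared against what the host reports for nvme0n1.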
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:35.623 16:57:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:35.881 16:57:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:35.881 16:57:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.813 ************************************ 00:06:36.813 START TEST filesystem_ext4 00:06:36.813 ************************************ 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:36.813 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:36.813 16:57:24 
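make_filesystem (common/autotest_common.sh@922-933 in the trace) is a small helper that selects the right force flag for the requested filesystem before calling mkfs; roughly, as reconstructed from the trace rather than quoted from the source:

    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0 force
        # ext4 forces with -F, btrfs and xfs force with -f
        if [[ $fstype == ext4 ]]; then force=-F; else force=-f; fi
        mkfs.$fstype $force "$dev_name"
    }

The real helper also carries a retry counter around the mkfs call (the local i=0 visible in the trace); that part is omitted in this sketch.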
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:36.813 mke2fs 1.46.5 (30-Dec-2021) 00:06:37.071 Discarding device blocks: 0/522240 done 00:06:37.071 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:37.071 Filesystem UUID: da91d364-7e52-4cc5-8296-2b660d8c9cbb 00:06:37.071 Superblock backups stored on blocks: 00:06:37.071 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:37.071 00:06:37.071 Allocating group tables: 0/64 done 00:06:37.071 Writing inode tables: 0/64 done 00:06:37.071 Creating journal (8192 blocks): done 00:06:37.071 Writing superblocks and filesystem accounting information: 0/64 done 00:06:37.071 00:06:37.071 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:37.071 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2912383 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:37.328 00:06:37.328 real 0m0.460s 00:06:37.328 user 0m0.030s 00:06:37.328 sys 0m0.057s 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:37.328 ************************************ 00:06:37.328 END TEST filesystem_ext4 00:06:37.328 ************************************ 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.328 16:57:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.328 ************************************ 00:06:37.328 START TEST filesystem_btrfs 00:06:37.328 ************************************ 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:37.328 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:37.329 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:37.329 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:37.329 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:37.329 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:37.329 16:57:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:37.892 btrfs-progs v6.6.2 00:06:37.892 See https://btrfs.readthedocs.io for more information. 00:06:37.892 00:06:37.892 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:37.892 NOTE: several default settings have changed in version 5.15, please make sure 00:06:37.892 this does not affect your deployments: 00:06:37.892 - DUP for metadata (-m dup) 00:06:37.892 - enabled no-holes (-O no-holes) 00:06:37.892 - enabled free-space-tree (-R free-space-tree) 00:06:37.892 00:06:37.892 Label: (null) 00:06:37.892 UUID: a2dc5b0a-a422-483e-ace5-8ba6e9b5c6ea 00:06:37.892 Node size: 16384 00:06:37.892 Sector size: 4096 00:06:37.892 Filesystem size: 510.00MiB 00:06:37.892 Block group profiles: 00:06:37.892 Data: single 8.00MiB 00:06:37.892 Metadata: DUP 32.00MiB 00:06:37.892 System: DUP 8.00MiB 00:06:37.892 SSD detected: yes 00:06:37.892 Zoned device: no 00:06:37.892 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:37.892 Runtime features: free-space-tree 00:06:37.892 Checksum: crc32c 00:06:37.892 Number of devices: 1 00:06:37.892 Devices: 00:06:37.892 ID SIZE PATH 00:06:37.892 1 510.00MiB /dev/nvme0n1p1 00:06:37.892 00:06:37.892 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:37.892 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2912383 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:38.149 00:06:38.149 real 0m0.664s 00:06:38.149 user 0m0.026s 00:06:38.149 sys 0m0.120s 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:38.149 ************************************ 00:06:38.149 END TEST filesystem_btrfs 00:06:38.149 ************************************ 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:38.149 16:57:25 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:38.149 ************************************ 00:06:38.149 START TEST filesystem_xfs 00:06:38.149 ************************************ 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:38.149 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:38.150 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:38.150 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:38.150 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:38.150 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:38.150 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:06:38.150 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:38.150 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:38.150 16:57:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:38.150 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:38.150 = sectsz=512 attr=2, projid32bit=1 00:06:38.150 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:38.150 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:38.150 data = bsize=4096 blocks=130560, imaxpct=25 00:06:38.150 = sunit=0 swidth=0 blks 00:06:38.150 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:38.150 log =internal log bsize=4096 blocks=16384, version=2 00:06:38.150 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:38.150 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:39.521 Discarding blocks...Done. 
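Every filesystem_* sub-test then exercises the exported namespace the same way, whatever mkfs variant was used; a condensed sketch of the body traced at target/filesystem.sh@23-43, with $nvmfpid standing in for the target PID (2912383 in this run) and the partition created once beforehand by the parted/partprobe step at filesystem.sh@68-69:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                      # one small write through the filesystem
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                         # the target process must have survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still visible on the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # and so is the partition

The real/user/sys timings printed after each sub-test come from run_test timing the whole body.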
00:06:39.521 16:57:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:39.521 16:57:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:41.415 16:57:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:41.415 16:57:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:41.415 16:57:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:41.415 16:57:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:41.415 16:57:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:41.415 16:57:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:41.415 16:57:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2912383 00:06:41.415 16:57:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:41.415 16:57:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:41.415 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:41.415 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:41.415 00:06:41.415 real 0m3.301s 00:06:41.415 user 0m0.023s 00:06:41.415 sys 0m0.071s 00:06:41.415 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.415 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:41.415 ************************************ 00:06:41.415 END TEST filesystem_xfs 00:06:41.415 ************************************ 00:06:41.415 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:41.671 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:41.671 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:41.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:41.929 
16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2912383 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2912383 ']' 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2912383 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2912383 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2912383' 00:06:41.929 killing process with pid 2912383 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 2912383 00:06:41.929 [2024-05-15 16:57:29.482289] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:41.929 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 2912383 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:42.495 00:06:42.495 real 0m11.163s 00:06:42.495 user 0m43.748s 00:06:42.495 sys 0m1.205s 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:42.495 ************************************ 00:06:42.495 END TEST nvmf_filesystem_no_in_capsule 00:06:42.495 ************************************ 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:42.495 ************************************ 00:06:42.495 START TEST nvmf_filesystem_in_capsule 00:06:42.495 ************************************ 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2914526 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2914526 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2914526 ']' 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.495 16:57:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:42.495 [2024-05-15 16:57:29.981178] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:06:42.495 [2024-05-15 16:57:29.981217] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.495 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.495 [2024-05-15 16:57:30.040230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.495 [2024-05-15 16:57:30.116447] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.495 [2024-05-15 16:57:30.116491] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
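This second phase repeats the same provisioning, connect and filesystem checks against a fresh nvmf_tgt; the only functional difference visible in the trace is the in-capsule data size handed to the transport (both lines copied from the trace):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0       # phase 1: nvmf_filesystem_no_in_capsule
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096    # phase 2: nvmf_filesystem_in_capsule

With -c 4096 the target accepts up to 4 KiB of write data inside the NVMe/TCP command capsule itself, so the small writes generated by the filesystem tests can take the in-capsule path instead of a separate data transfer.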
00:06:42.495 [2024-05-15 16:57:30.116498] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.495 [2024-05-15 16:57:30.116504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.495 [2024-05-15 16:57:30.116509] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:42.495 [2024-05-15 16:57:30.116573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.495 [2024-05-15 16:57:30.116690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.495 [2024-05-15 16:57:30.116776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.495 [2024-05-15 16:57:30.116777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.427 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.427 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:43.427 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:43.427 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.427 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.427 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.427 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:43.427 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:43.427 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.427 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.427 [2024-05-15 16:57:30.836231] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.427 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.428 Malloc1 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.428 16:57:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.428 [2024-05-15 16:57:30.980381] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:43.428 [2024-05-15 16:57:30.980635] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.428 16:57:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:43.428 16:57:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.428 16:57:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:43.428 { 00:06:43.428 "name": "Malloc1", 00:06:43.428 "aliases": [ 00:06:43.428 "a151502a-816a-49a3-ba2a-d7c78081b78c" 00:06:43.428 ], 00:06:43.428 "product_name": "Malloc disk", 00:06:43.428 "block_size": 512, 00:06:43.428 "num_blocks": 1048576, 00:06:43.428 "uuid": "a151502a-816a-49a3-ba2a-d7c78081b78c", 00:06:43.428 "assigned_rate_limits": { 00:06:43.428 "rw_ios_per_sec": 0, 00:06:43.428 "rw_mbytes_per_sec": 0, 00:06:43.428 "r_mbytes_per_sec": 0, 00:06:43.428 "w_mbytes_per_sec": 0 00:06:43.428 }, 00:06:43.428 "claimed": true, 00:06:43.428 "claim_type": "exclusive_write", 00:06:43.428 "zoned": false, 00:06:43.428 "supported_io_types": { 00:06:43.428 "read": true, 00:06:43.428 "write": true, 00:06:43.428 "unmap": true, 00:06:43.428 "write_zeroes": true, 00:06:43.428 "flush": true, 00:06:43.428 "reset": true, 
00:06:43.428 "compare": false, 00:06:43.428 "compare_and_write": false, 00:06:43.428 "abort": true, 00:06:43.428 "nvme_admin": false, 00:06:43.428 "nvme_io": false 00:06:43.428 }, 00:06:43.428 "memory_domains": [ 00:06:43.428 { 00:06:43.428 "dma_device_id": "system", 00:06:43.428 "dma_device_type": 1 00:06:43.428 }, 00:06:43.428 { 00:06:43.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.428 "dma_device_type": 2 00:06:43.428 } 00:06:43.428 ], 00:06:43.428 "driver_specific": {} 00:06:43.428 } 00:06:43.428 ]' 00:06:43.428 16:57:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:43.428 16:57:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:43.428 16:57:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:43.686 16:57:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:43.686 16:57:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:43.686 16:57:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:43.686 16:57:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:43.686 16:57:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:44.618 16:57:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:44.618 16:57:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:44.618 16:57:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:44.618 16:57:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:44.618 16:57:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:47.143 16:57:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:48.515 ************************************ 00:06:48.515 START TEST filesystem_in_capsule_ext4 00:06:48.515 ************************************ 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:48.515 16:57:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:48.515 mke2fs 1.46.5 (30-Dec-2021) 00:06:48.515 Discarding device blocks: 0/522240 done 00:06:48.515 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:48.515 Filesystem UUID: 037538dc-f70e-4823-aaac-4fbf2b62ecfa 00:06:48.515 Superblock backups stored on blocks: 00:06:48.515 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:48.515 00:06:48.515 Allocating group tables: 0/64 done 00:06:48.515 Writing inode tables: 0/64 done 00:06:48.515 Creating journal (8192 blocks): done 00:06:49.447 Writing superblocks and filesystem accounting information: 0/64 done 00:06:49.447 00:06:49.447 16:57:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:49.447 16:57:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:49.447 16:57:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2914526 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:49.447 00:06:49.447 real 0m1.230s 00:06:49.447 user 0m0.023s 00:06:49.447 sys 0m0.066s 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:49.447 ************************************ 00:06:49.447 END TEST filesystem_in_capsule_ext4 00:06:49.447 ************************************ 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:49.447 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.703 ************************************ 00:06:49.703 START TEST filesystem_in_capsule_btrfs 00:06:49.703 ************************************ 00:06:49.703 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:49.703 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:49.703 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:49.703 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:49.703 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:49.703 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:49.703 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:49.703 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:49.703 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:49.703 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:49.703 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:49.960 btrfs-progs v6.6.2 00:06:49.960 See https://btrfs.readthedocs.io for more information. 00:06:49.960 00:06:49.960 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:49.960 NOTE: several default settings have changed in version 5.15, please make sure 00:06:49.960 this does not affect your deployments: 00:06:49.960 - DUP for metadata (-m dup) 00:06:49.960 - enabled no-holes (-O no-holes) 00:06:49.960 - enabled free-space-tree (-R free-space-tree) 00:06:49.960 00:06:49.960 Label: (null) 00:06:49.960 UUID: cbe8a442-d6c7-41ee-827a-78c1e643f411 00:06:49.960 Node size: 16384 00:06:49.960 Sector size: 4096 00:06:49.960 Filesystem size: 510.00MiB 00:06:49.960 Block group profiles: 00:06:49.960 Data: single 8.00MiB 00:06:49.960 Metadata: DUP 32.00MiB 00:06:49.960 System: DUP 8.00MiB 00:06:49.960 SSD detected: yes 00:06:49.960 Zoned device: no 00:06:49.960 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:49.960 Runtime features: free-space-tree 00:06:49.960 Checksum: crc32c 00:06:49.960 Number of devices: 1 00:06:49.960 Devices: 00:06:49.960 ID SIZE PATH 00:06:49.960 1 510.00MiB /dev/nvme0n1p1 00:06:49.960 00:06:49.960 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:49.960 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:50.218 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:50.218 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:50.218 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:50.218 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:50.218 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:50.218 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2914526 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:50.474 00:06:50.474 real 0m0.758s 00:06:50.474 user 0m0.030s 00:06:50.474 sys 0m0.119s 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:50.474 ************************************ 00:06:50.474 END TEST filesystem_in_capsule_btrfs 00:06:50.474 ************************************ 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:50.474 ************************************ 00:06:50.474 START TEST filesystem_in_capsule_xfs 00:06:50.474 ************************************ 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:50.474 16:57:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:50.474 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:50.474 = sectsz=512 attr=2, projid32bit=1 00:06:50.474 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:50.474 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:50.474 data = bsize=4096 blocks=130560, imaxpct=25 00:06:50.474 = sunit=0 swidth=0 blks 00:06:50.474 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:50.474 log =internal log bsize=4096 blocks=16384, version=2 00:06:50.474 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:50.474 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:51.402 Discarding blocks...Done. 
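The mkfs.xfs geometry table above (like the ext4 and btrfs output before it) comes from the make_filesystem helper in target/filesystem.sh; after each mkfs the script runs the same mount/touch/sync/rm/umount smoke test against the new filesystem. A minimal standalone sketch of that cycle, assuming the namespace exported by the target shows up on the host as /dev/nvme0n1 as it does in this run:

    # sketch only -- condensed from the target/filesystem.sh steps traced in this log
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # one partition over the whole namespace
    partprobe; sleep 1
    mkfs.xfs -f /dev/nvme0n1p1        # the ext4 and btrfs passes use mkfs.ext4 -F / mkfs.btrfs -f instead
    mkdir -p /mnt/device
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa; sync       # create a file and flush it out to the target
    rm /mnt/device/aaa; sync
    umount /mnt/device                # the test passes if every step exits 0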
00:06:51.403 16:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:51.403 16:57:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:53.929 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:53.929 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:53.929 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:53.929 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:53.929 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:53.929 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:53.929 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2914526 00:06:53.929 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:53.929 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:53.929 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:53.929 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:53.929 00:06:53.929 real 0m3.341s 00:06:53.929 user 0m0.025s 00:06:53.929 sys 0m0.070s 00:06:53.929 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.929 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:53.929 ************************************ 00:06:53.929 END TEST filesystem_in_capsule_xfs 00:06:53.929 ************************************ 00:06:53.929 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:54.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:54.186 16:57:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2914526 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2914526 ']' 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2914526 00:06:54.186 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:54.187 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:54.187 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2914526 00:06:54.187 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:54.187 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:54.187 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2914526' 00:06:54.187 killing process with pid 2914526 00:06:54.187 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 2914526 00:06:54.187 [2024-05-15 16:57:41.800603] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:54.187 16:57:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 2914526 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:54.787 00:06:54.787 real 0m12.244s 00:06:54.787 user 0m48.001s 00:06:54.787 sys 0m1.249s 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.787 ************************************ 00:06:54.787 END TEST nvmf_filesystem_in_capsule 00:06:54.787 ************************************ 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:54.787 rmmod nvme_tcp 00:06:54.787 rmmod nvme_fabrics 00:06:54.787 rmmod nvme_keyring 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:54.787 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:54.788 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:54.788 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:54.788 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:54.788 16:57:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.788 16:57:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:54.788 16:57:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.698 16:57:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:56.698 00:06:56.698 real 0m30.950s 00:06:56.698 user 1m33.278s 00:06:56.698 sys 0m6.407s 00:06:56.698 16:57:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.698 16:57:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:56.698 ************************************ 00:06:56.698 END TEST nvmf_filesystem 00:06:56.698 ************************************ 00:06:56.958 16:57:44 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:56.958 16:57:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:56.958 16:57:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.958 16:57:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:56.958 ************************************ 00:06:56.958 START TEST nvmf_target_discovery 00:06:56.958 ************************************ 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:56.958 * Looking for test storage... 
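The disconnect/killprocess/rmmod lines above are the standard per-suite teardown: the host drops its NVMe/TCP connection, the subsystem is deleted over RPC, the nvmf_tgt process is stopped, and the kernel fabrics modules are unloaded before the next suite (nvmf_target_discovery, which starts here) sets everything up again. A hedged equivalent of that teardown, assuming rpc_cmd resolves to SPDK's scripts/rpc.py as it does in autotest_common.sh:

    # sketch of the teardown traced above ($nvmfpid is the harness variable; 2914526 in this run)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                    # host side: drop the controller
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # target side: remove the subsystem
    kill "$nvmfpid" && wait "$nvmfpid"                               # stop the nvmf_tgt application
    modprobe -v -r nvme-tcp                                          # unload host transport modules,
    modprobe -v -r nvme-fabrics                                      # producing the rmmod messages above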
00:06:56.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:56.958 16:57:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.228 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.229 16:57:49 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:02.229 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:02.229 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:02.229 Found net devices under 0000:86:00.0: cvl_0_0 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:02.229 Found net devices under 0000:86:00.1: cvl_0_1 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:02.229 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.487 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.487 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.487 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:02.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:02.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:07:02.487 00:07:02.487 --- 10.0.0.2 ping statistics --- 00:07:02.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.487 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:07:02.487 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:02.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:07:02.487 00:07:02.487 --- 10.0.0.1 ping statistics --- 00:07:02.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.487 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:07:02.487 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.487 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:02.487 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:02.487 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.487 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:02.487 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:02.487 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.488 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:02.488 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:02.488 16:57:49 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:02.488 16:57:49 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:02.488 16:57:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:02.488 16:57:49 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.488 16:57:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2920127 00:07:02.488 16:57:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2920127 00:07:02.488 16:57:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:02.488 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 2920127 ']' 00:07:02.488 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.488 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:02.488 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:02.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.488 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:02.488 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:02.488 [2024-05-15 16:57:50.051471] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:07:02.488 [2024-05-15 16:57:50.051517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.488 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.488 [2024-05-15 16:57:50.109964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.745 [2024-05-15 16:57:50.191262] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.745 [2024-05-15 16:57:50.191296] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.745 [2024-05-15 16:57:50.191303] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.745 [2024-05-15 16:57:50.191309] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.745 [2024-05-15 16:57:50.191314] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:02.745 [2024-05-15 16:57:50.191359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.745 [2024-05-15 16:57:50.191453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.745 [2024-05-15 16:57:50.191538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.745 [2024-05-15 16:57:50.191539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.309 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:03.309 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:07:03.309 16:57:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:03.309 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:03.309 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.309 16:57:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:03.309 16:57:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:03.309 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.309 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.309 [2024-05-15 16:57:50.912092] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:03.309 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:03.310 16:57:50 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.310 Null1 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.310 [2024-05-15 16:57:50.957442] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:03.310 [2024-05-15 16:57:50.957659] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.310 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.568 Null2 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.568 16:57:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.568 Null3 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.568 Null4 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.568 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:03.826 00:07:03.826 Discovery Log Number of Records 6, Generation counter 6 00:07:03.826 =====Discovery Log Entry 0====== 00:07:03.826 trtype: tcp 00:07:03.826 adrfam: ipv4 00:07:03.826 subtype: current discovery subsystem 00:07:03.826 treq: not required 00:07:03.826 portid: 0 00:07:03.826 trsvcid: 4420 00:07:03.826 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:03.826 traddr: 10.0.0.2 00:07:03.826 eflags: explicit discovery connections, duplicate discovery information 00:07:03.826 sectype: none 00:07:03.826 =====Discovery Log Entry 1====== 00:07:03.826 trtype: tcp 00:07:03.826 adrfam: ipv4 00:07:03.826 subtype: nvme subsystem 00:07:03.826 treq: not required 00:07:03.826 portid: 0 00:07:03.826 trsvcid: 4420 00:07:03.826 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:03.826 traddr: 10.0.0.2 00:07:03.826 eflags: none 00:07:03.826 sectype: none 00:07:03.826 =====Discovery Log Entry 2====== 00:07:03.826 trtype: tcp 00:07:03.826 adrfam: ipv4 00:07:03.826 subtype: nvme subsystem 00:07:03.826 treq: not required 00:07:03.826 portid: 0 00:07:03.826 trsvcid: 4420 00:07:03.826 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:03.826 traddr: 10.0.0.2 00:07:03.826 eflags: none 00:07:03.826 sectype: none 00:07:03.826 =====Discovery Log Entry 3====== 00:07:03.826 trtype: tcp 00:07:03.826 adrfam: ipv4 00:07:03.826 subtype: nvme subsystem 00:07:03.826 treq: not required 00:07:03.826 portid: 0 00:07:03.826 trsvcid: 4420 00:07:03.826 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:03.826 traddr: 10.0.0.2 
00:07:03.826 eflags: none 00:07:03.826 sectype: none 00:07:03.826 =====Discovery Log Entry 4====== 00:07:03.826 trtype: tcp 00:07:03.826 adrfam: ipv4 00:07:03.826 subtype: nvme subsystem 00:07:03.826 treq: not required 00:07:03.826 portid: 0 00:07:03.826 trsvcid: 4420 00:07:03.826 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:03.827 traddr: 10.0.0.2 00:07:03.827 eflags: none 00:07:03.827 sectype: none 00:07:03.827 =====Discovery Log Entry 5====== 00:07:03.827 trtype: tcp 00:07:03.827 adrfam: ipv4 00:07:03.827 subtype: discovery subsystem referral 00:07:03.827 treq: not required 00:07:03.827 portid: 0 00:07:03.827 trsvcid: 4430 00:07:03.827 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:03.827 traddr: 10.0.0.2 00:07:03.827 eflags: none 00:07:03.827 sectype: none 00:07:03.827 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:03.827 Perform nvmf subsystem discovery via RPC 00:07:03.827 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:03.827 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.827 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.827 [ 00:07:03.827 { 00:07:03.827 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:03.827 "subtype": "Discovery", 00:07:03.827 "listen_addresses": [ 00:07:03.827 { 00:07:03.827 "trtype": "TCP", 00:07:03.827 "adrfam": "IPv4", 00:07:03.827 "traddr": "10.0.0.2", 00:07:03.827 "trsvcid": "4420" 00:07:03.827 } 00:07:03.827 ], 00:07:03.827 "allow_any_host": true, 00:07:03.827 "hosts": [] 00:07:03.827 }, 00:07:03.827 { 00:07:03.827 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:03.827 "subtype": "NVMe", 00:07:03.827 "listen_addresses": [ 00:07:03.827 { 00:07:03.827 "trtype": "TCP", 00:07:03.827 "adrfam": "IPv4", 00:07:03.827 "traddr": "10.0.0.2", 00:07:03.827 "trsvcid": "4420" 00:07:03.827 } 00:07:03.827 ], 00:07:03.827 "allow_any_host": true, 00:07:03.827 "hosts": [], 00:07:03.827 "serial_number": "SPDK00000000000001", 00:07:03.827 "model_number": "SPDK bdev Controller", 00:07:03.827 "max_namespaces": 32, 00:07:03.827 "min_cntlid": 1, 00:07:03.827 "max_cntlid": 65519, 00:07:03.827 "namespaces": [ 00:07:03.827 { 00:07:03.827 "nsid": 1, 00:07:03.827 "bdev_name": "Null1", 00:07:03.827 "name": "Null1", 00:07:03.827 "nguid": "30CF639FE4604B0CA043FF7369CE1BFF", 00:07:03.827 "uuid": "30cf639f-e460-4b0c-a043-ff7369ce1bff" 00:07:03.827 } 00:07:03.827 ] 00:07:03.827 }, 00:07:03.827 { 00:07:03.827 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:03.827 "subtype": "NVMe", 00:07:03.827 "listen_addresses": [ 00:07:03.827 { 00:07:03.827 "trtype": "TCP", 00:07:03.827 "adrfam": "IPv4", 00:07:03.827 "traddr": "10.0.0.2", 00:07:03.827 "trsvcid": "4420" 00:07:03.827 } 00:07:03.827 ], 00:07:03.827 "allow_any_host": true, 00:07:03.827 "hosts": [], 00:07:03.827 "serial_number": "SPDK00000000000002", 00:07:03.827 "model_number": "SPDK bdev Controller", 00:07:03.827 "max_namespaces": 32, 00:07:03.827 "min_cntlid": 1, 00:07:03.827 "max_cntlid": 65519, 00:07:03.827 "namespaces": [ 00:07:03.827 { 00:07:03.827 "nsid": 1, 00:07:03.827 "bdev_name": "Null2", 00:07:03.827 "name": "Null2", 00:07:03.827 "nguid": "F6DF245C6EB44BE89C1B3F249204C414", 00:07:03.827 "uuid": "f6df245c-6eb4-4be8-9c1b-3f249204c414" 00:07:03.827 } 00:07:03.827 ] 00:07:03.827 }, 00:07:03.827 { 00:07:03.827 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:03.827 "subtype": "NVMe", 00:07:03.827 "listen_addresses": [ 
00:07:03.827 { 00:07:03.827 "trtype": "TCP", 00:07:03.827 "adrfam": "IPv4", 00:07:03.827 "traddr": "10.0.0.2", 00:07:03.827 "trsvcid": "4420" 00:07:03.827 } 00:07:03.827 ], 00:07:03.827 "allow_any_host": true, 00:07:03.827 "hosts": [], 00:07:03.827 "serial_number": "SPDK00000000000003", 00:07:03.827 "model_number": "SPDK bdev Controller", 00:07:03.827 "max_namespaces": 32, 00:07:03.827 "min_cntlid": 1, 00:07:03.827 "max_cntlid": 65519, 00:07:03.827 "namespaces": [ 00:07:03.827 { 00:07:03.827 "nsid": 1, 00:07:03.827 "bdev_name": "Null3", 00:07:03.827 "name": "Null3", 00:07:03.827 "nguid": "766F5E32F6614FFCBF6A82F7953CC964", 00:07:03.827 "uuid": "766f5e32-f661-4ffc-bf6a-82f7953cc964" 00:07:03.827 } 00:07:03.827 ] 00:07:03.827 }, 00:07:03.827 { 00:07:03.827 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:03.827 "subtype": "NVMe", 00:07:03.827 "listen_addresses": [ 00:07:03.827 { 00:07:03.827 "trtype": "TCP", 00:07:03.827 "adrfam": "IPv4", 00:07:03.827 "traddr": "10.0.0.2", 00:07:03.827 "trsvcid": "4420" 00:07:03.827 } 00:07:03.827 ], 00:07:03.827 "allow_any_host": true, 00:07:03.827 "hosts": [], 00:07:03.827 "serial_number": "SPDK00000000000004", 00:07:03.827 "model_number": "SPDK bdev Controller", 00:07:03.827 "max_namespaces": 32, 00:07:03.827 "min_cntlid": 1, 00:07:03.827 "max_cntlid": 65519, 00:07:03.827 "namespaces": [ 00:07:03.827 { 00:07:03.827 "nsid": 1, 00:07:03.827 "bdev_name": "Null4", 00:07:03.827 "name": "Null4", 00:07:03.827 "nguid": "C2B58273DFFA4A169BA5ACC4FF29A8AF", 00:07:03.827 "uuid": "c2b58273-dffa-4a16-9ba5-acc4ff29a8af" 00:07:03.827 } 00:07:03.827 ] 00:07:03.827 } 00:07:03.827 ] 00:07:03.827 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.827 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:03.828 
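For reference, the per-subsystem sequence the discovery test loops over above, together with its teardown, reduces to the RPC and nvme-cli calls below. This is a hand-written sketch, not the script itself: the rpc.py path is assumed to be the in-tree scripts/rpc.py, and the --hostnqn/--hostid flags used in the trace are omitted.

    # back each subsystem with a null bdev and expose it on the TCP listener
    ./scripts/rpc.py bdev_null_create Null1 102400 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # discovery service plus one referral, giving the 6 discovery log records seen above
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

    # verify from the initiator and over RPC
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems

    # teardown mirrors the setup
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py bdev_null_delete Null1
    ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430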
16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:03.828 rmmod nvme_tcp 00:07:03.828 rmmod nvme_fabrics 00:07:03.828 rmmod nvme_keyring 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2920127 ']' 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2920127 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 2920127 ']' 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 2920127 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:03.828 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2920127 00:07:04.087 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:04.087 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:04.087 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2920127' 00:07:04.087 killing process with pid 2920127 00:07:04.087 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 2920127 00:07:04.087 [2024-05-15 16:57:51.493388] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:04.087 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 2920127 00:07:04.087 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:04.087 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:04.087 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:04.087 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:04.087 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:04.087 16:57:51 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.087 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.087 16:57:51 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.620 16:57:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:06.621 00:07:06.621 real 0m9.350s 00:07:06.621 user 
0m7.697s 00:07:06.621 sys 0m4.449s 00:07:06.621 16:57:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.621 16:57:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:06.621 ************************************ 00:07:06.621 END TEST nvmf_target_discovery 00:07:06.621 ************************************ 00:07:06.621 16:57:53 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:06.621 16:57:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:06.621 16:57:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.621 16:57:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.621 ************************************ 00:07:06.621 START TEST nvmf_referrals 00:07:06.621 ************************************ 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:06.621 * Looking for test storage... 00:07:06.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.621 16:57:53 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:06.621 16:57:53 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:06.621 16:57:53 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:11.882 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:11.882 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:11.882 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:11.883 Found net devices under 0000:86:00.0: cvl_0_0 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:11.883 Found net devices under 0000:86:00.1: cvl_0_1 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
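The nvmf_tcp_init plumbing traced just above and continuing below boils down roughly to the commands that follow; interface names and addresses are copied from the trace, and this is a hand-written summary rather than an excerpt of nvmf/common.sh.

    # put one e810 port (cvl_0_0) into a private namespace for the target,
    # keep the other (cvl_0_1) in the root namespace as the initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check connectivity in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1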
00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:11.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:11.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:07:11.883 00:07:11.883 --- 10.0.0.2 ping statistics --- 00:07:11.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.883 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:07:11.883 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:11.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:11.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:07:11.883 00:07:11.883 --- 10.0.0.1 ping statistics --- 00:07:11.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.883 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2923907 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2923907 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 2923907 ']' 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:12.141 16:57:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:12.141 [2024-05-15 16:57:59.623850] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:07:12.141 [2024-05-15 16:57:59.623891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.141 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.141 [2024-05-15 16:57:59.681084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.141 [2024-05-15 16:57:59.753543] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.141 [2024-05-15 16:57:59.753584] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.141 [2024-05-15 16:57:59.753591] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.141 [2024-05-15 16:57:59.753597] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.141 [2024-05-15 16:57:59.753602] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:12.141 [2024-05-15 16:57:59.753677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.141 [2024-05-15 16:57:59.753799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.141 [2024-05-15 16:57:59.753890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.141 [2024-05-15 16:57:59.753891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.071 [2024-05-15 16:58:00.478059] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.071 [2024-05-15 16:58:00.491343] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:13.071 [2024-05-15 16:58:00.491583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:13.071 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:13.072 16:58:00 
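The referral checks traced here compare the target's view (nvmf_discovery_get_referrals over RPC) with what an initiator sees on the wire. Outside the test harness the same comparison looks roughly like the following; the rpc.py path is assumed and the --hostnqn/--hostid flags from the trace are left out for brevity.

    # target side: add three referrals and list their addresses
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

    # initiator side: the same addresses must appear as discovery log records
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'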
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:13.072 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.329 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.587 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.587 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:13.587 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:13.587 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:13.587 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:13.587 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.587 16:58:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:13.587 16:58:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:13.587 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:13.844 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:13.844 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:13.844 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:13.844 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:13.844 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:13.844 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:14.100 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
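The second half of the test, traced around this point, covers referrals that carry an explicit subsystem NQN. Stripped of the harness, the steps look roughly like this (NQNs, addresses and jq filters taken from the trace; rpc.py path assumed as before):

    # a referral may point at another discovery service or at a specific subsystem
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

    # the subsystem-typed referral is reported as an "nvme subsystem" record,
    # the other as a "discovery subsystem referral" record
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq '.records[] | select(.subtype == "nvme subsystem")'
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq '.records[] | select(.subtype == "discovery subsystem referral")'

    # removal has to name the same subsystem NQN
    ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1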
00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:14.101 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:14.357 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:14.357 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:14.357 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:14.357 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:14.357 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:14.357 16:58:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:14.614 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:14.871 rmmod nvme_tcp 00:07:14.871 rmmod nvme_fabrics 00:07:14.871 rmmod nvme_keyring 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2923907 ']' 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2923907 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 2923907 ']' 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 2923907 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2923907 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2923907' 00:07:14.871 killing process with pid 2923907 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 2923907 00:07:14.871 [2024-05-15 16:58:02.373034] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:14.871 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 2923907 00:07:15.129 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:15.129 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:15.130 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:15.130 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:15.130 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
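The nvmftestfini sequence traced just above and finishing below is approximately the following. The namespace removal is an assumption (remove_spdk_ns itself is redirected away in the trace), and $nvmfpid stands for the nvmf_tgt pid printed earlier (2923907 in this run).

    # unload the initiator-side modules, stop the target, drop the namespace
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # wait works because the target was started from this shell
    ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of remove_spdk_ns
    ip -4 addr flush cvl_0_1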
00:07:15.130 16:58:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.130 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:15.130 16:58:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.031 16:58:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:17.031 00:07:17.031 real 0m10.791s 00:07:17.031 user 0m13.613s 00:07:17.031 sys 0m4.847s 00:07:17.031 16:58:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.031 16:58:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:17.031 ************************************ 00:07:17.031 END TEST nvmf_referrals 00:07:17.031 ************************************ 00:07:17.031 16:58:04 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:17.031 16:58:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:17.031 16:58:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.031 16:58:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:17.289 ************************************ 00:07:17.289 START TEST nvmf_connect_disconnect 00:07:17.289 ************************************ 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:17.289 * Looking for test storage... 00:07:17.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:17.289 16:58:04 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
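nvmftestinit above pulls in nvmf/common.sh, which generates the host NQN with nvme gen-hostnqn and keeps a matching host ID so every later nvme connect / nvme discover presents a consistent identity. The sketch below shows one way those two values can be produced and handed to nvme connect; deriving the host ID from the NQN's UUID suffix, and the cnode1 subsystem and 10.0.0.2:4420 address, are assumptions taken from what this run logs, not a quote of common.sh.

    # Generate a host NQN (nvme-cli returns nqn.2014-08.org.nvmexpress:uuid:<uuid>)
    # and reuse its UUID suffix as the host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}

    # Connect to the subsystem this test creates later on 10.0.0.2:4420.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"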
00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:17.289 16:58:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:22.601 
16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:22.601 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:22.601 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:22.601 Found net devices under 0000:86:00.0: cvl_0_0 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:22.601 Found net devices under 0000:86:00.1: cvl_0_1 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:22.601 16:58:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:22.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:07:22.601 00:07:22.601 --- 10.0.0.2 ping statistics --- 00:07:22.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.601 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:22.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:22.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:07:22.601 00:07:22.601 --- 10.0.0.1 ping statistics --- 00:07:22.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.601 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:22.601 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:22.602 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2927952 00:07:22.602 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2927952 00:07:22.602 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 2927952 ']' 00:07:22.602 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.602 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:22.602 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.602 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:22.602 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:22.602 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:22.602 [2024-05-15 16:58:10.122140] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:07:22.602 [2024-05-15 16:58:10.122194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.602 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.602 [2024-05-15 16:58:10.179121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.859 [2024-05-15 16:58:10.260299] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:22.859 [2024-05-15 16:58:10.260333] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.859 [2024-05-15 16:58:10.260340] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.859 [2024-05-15 16:58:10.260346] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.859 [2024-05-15 16:58:10.260352] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:22.859 [2024-05-15 16:58:10.260388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.859 [2024-05-15 16:58:10.260406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.859 [2024-05-15 16:58:10.260494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.859 [2024-05-15 16:58:10.260495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.422 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:23.422 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:07:23.422 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:23.422 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:23.422 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:23.422 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.422 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:23.422 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.422 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:23.422 [2024-05-15 16:58:10.977988] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.422 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.422 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:23.422 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.422 16:58:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:23.422 16:58:11 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:23.422 [2024-05-15 16:58:11.029802] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:23.422 [2024-05-15 16:58:11.030037] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:23.422 16:58:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:26.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:39.783 rmmod nvme_tcp 00:07:39.783 rmmod nvme_fabrics 00:07:39.783 rmmod nvme_keyring 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:39.783 16:58:27 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2927952 ']' 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2927952 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 2927952 ']' 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 2927952 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:39.783 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2927952 00:07:40.041 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:40.041 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:40.041 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2927952' 00:07:40.041 killing process with pid 2927952 00:07:40.041 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 2927952 00:07:40.041 [2024-05-15 16:58:27.445135] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:40.041 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 2927952 00:07:40.041 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:40.041 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:40.041 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:40.041 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:40.041 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:40.041 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.041 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.041 16:58:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.568 16:58:29 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:42.568 00:07:42.568 real 0m25.023s 00:07:42.568 user 1m10.634s 00:07:42.568 sys 0m5.143s 00:07:42.568 16:58:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.568 16:58:29 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:42.568 ************************************ 00:07:42.568 END TEST nvmf_connect_disconnect 00:07:42.568 ************************************ 00:07:42.569 16:58:29 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:42.569 16:58:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:42.569 16:58:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.569 16:58:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.569 ************************************ 00:07:42.569 START TEST nvmf_multitarget 
00:07:42.569 ************************************ 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:42.569 * Looking for test storage... 00:07:42.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:42.569 16:58:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:47.829 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:47.829 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:47.829 Found net devices under 0000:86:00.0: cvl_0_0 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
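The device scan above maps each Intel e810 PCI function to its kernel net device by globbing /sys/bus/pci/devices/<pci-address>/net/ and stripping the path, which is how the "Found net devices under 0000:86:00.0: cvl_0_0" lines are produced. A minimal standalone version of that lookup is sketched below; the PCI address is just the one present on this machine and is an assumption anywhere else.

    # Print the net devices backed by one PCI function, mirroring the
    # sysfs glob used by the scan above.
    pci=0000:86:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] && echo "Found net device under $pci: ${dev##*/}"
    done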
00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:47.829 Found net devices under 0000:86:00.1: cvl_0_1 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:47.829 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:48.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:48.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:07:48.088 00:07:48.088 --- 10.0.0.2 ping statistics --- 00:07:48.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.088 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:07:48.088 00:07:48.088 --- 10.0.0.1 ping statistics --- 00:07:48.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.088 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2934376 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2934376 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 2934376 ']' 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:48.088 16:58:35 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:48.088 [2024-05-15 16:58:35.615330] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:07:48.088 [2024-05-15 16:58:35.615370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.088 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.088 [2024-05-15 16:58:35.672825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.346 [2024-05-15 16:58:35.750797] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.346 [2024-05-15 16:58:35.750834] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.346 [2024-05-15 16:58:35.750842] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.346 [2024-05-15 16:58:35.750848] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.346 [2024-05-15 16:58:35.750853] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.346 [2024-05-15 16:58:35.750896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.346 [2024-05-15 16:58:35.750918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.346 [2024-05-15 16:58:35.750983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.346 [2024-05-15 16:58:35.750982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.909 16:58:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:48.909 16:58:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:07:48.909 16:58:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:48.909 16:58:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.909 16:58:36 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:48.909 16:58:36 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.909 16:58:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:48.909 16:58:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:48.909 16:58:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:49.174 16:58:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:49.174 16:58:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:49.174 "nvmf_tgt_1" 00:07:49.174 16:58:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:49.174 "nvmf_tgt_2" 00:07:49.174 16:58:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:49.174 16:58:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:49.466 16:58:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:07:49.466 
16:58:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:49.466 true 00:07:49.466 16:58:36 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:49.466 true 00:07:49.466 16:58:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:49.466 16:58:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:49.724 rmmod nvme_tcp 00:07:49.724 rmmod nvme_fabrics 00:07:49.724 rmmod nvme_keyring 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2934376 ']' 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2934376 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 2934376 ']' 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 2934376 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2934376 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2934376' 00:07:49.724 killing process with pid 2934376 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 2934376 00:07:49.724 16:58:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 2934376 00:07:49.981 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:49.981 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:49.981 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:49.981 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:49.981 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:49.981 16:58:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.981 16:58:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.981 16:58:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.509 16:58:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:52.509 00:07:52.509 real 0m9.759s 00:07:52.509 user 0m9.230s 00:07:52.509 sys 0m4.720s 00:07:52.509 16:58:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.509 16:58:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:52.509 ************************************ 00:07:52.509 END TEST nvmf_multitarget 00:07:52.509 ************************************ 00:07:52.509 16:58:39 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:52.509 16:58:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:52.509 16:58:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.509 16:58:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:52.509 ************************************ 00:07:52.509 START TEST nvmf_rpc 00:07:52.509 ************************************ 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:52.509 * Looking for test storage... 00:07:52.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.509 16:58:39 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.509 16:58:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.510 
16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:52.510 16:58:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:57.772 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:57.772 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:57.772 Found net devices under 0000:86:00.0: cvl_0_0 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.772 
16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:57.772 Found net devices under 0000:86:00.1: cvl_0_1 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.772 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.773 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:57.773 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.773 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.773 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:57.773 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:57.773 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.773 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.773 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.773 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.773 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:57.773 16:58:44 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:57.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:57.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:07:57.773 00:07:57.773 --- 10.0.0.2 ping statistics --- 00:07:57.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.773 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:57.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:57.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:07:57.773 00:07:57.773 --- 10.0.0.1 ping statistics --- 00:07:57.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.773 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2938164 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2938164 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 2938164 ']' 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:57.773 16:58:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.773 [2024-05-15 16:58:45.197017] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
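The trace from the PCI scan through the two pings above is nvmf_tcp_init rebuilding the same namespace topology for the rpc test before the target is started again: one e810 port (cvl_0_0) is moved into a namespace for the target side, the other (cvl_0_1) stays in the root namespace for the initiator, and TCP/4420 is opened. The same steps, condensed into a standalone sketch using the interface names found above:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                              # root ns -> netns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # netns -> root ns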
00:07:57.773 [2024-05-15 16:58:45.197061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.773 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.773 [2024-05-15 16:58:45.250351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.773 [2024-05-15 16:58:45.329398] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.773 [2024-05-15 16:58:45.329437] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.773 [2024-05-15 16:58:45.329444] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.773 [2024-05-15 16:58:45.329450] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.773 [2024-05-15 16:58:45.329455] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.773 [2024-05-15 16:58:45.329509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.773 [2024-05-15 16:58:45.329525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.773 [2024-05-15 16:58:45.329616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.773 [2024-05-15 16:58:45.329617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:58.705 "tick_rate": 2300000000, 00:07:58.705 "poll_groups": [ 00:07:58.705 { 00:07:58.705 "name": "nvmf_tgt_poll_group_000", 00:07:58.705 "admin_qpairs": 0, 00:07:58.705 "io_qpairs": 0, 00:07:58.705 "current_admin_qpairs": 0, 00:07:58.705 "current_io_qpairs": 0, 00:07:58.705 "pending_bdev_io": 0, 00:07:58.705 "completed_nvme_io": 0, 00:07:58.705 "transports": [] 00:07:58.705 }, 00:07:58.705 { 00:07:58.705 "name": "nvmf_tgt_poll_group_001", 00:07:58.705 "admin_qpairs": 0, 00:07:58.705 "io_qpairs": 0, 00:07:58.705 "current_admin_qpairs": 0, 00:07:58.705 "current_io_qpairs": 0, 00:07:58.705 "pending_bdev_io": 0, 00:07:58.705 "completed_nvme_io": 0, 00:07:58.705 "transports": [] 00:07:58.705 }, 00:07:58.705 { 00:07:58.705 "name": "nvmf_tgt_poll_group_002", 00:07:58.705 "admin_qpairs": 0, 00:07:58.705 "io_qpairs": 0, 00:07:58.705 "current_admin_qpairs": 0, 00:07:58.705 "current_io_qpairs": 0, 00:07:58.705 "pending_bdev_io": 0, 00:07:58.705 "completed_nvme_io": 0, 00:07:58.705 "transports": [] 
00:07:58.705 }, 00:07:58.705 { 00:07:58.705 "name": "nvmf_tgt_poll_group_003", 00:07:58.705 "admin_qpairs": 0, 00:07:58.705 "io_qpairs": 0, 00:07:58.705 "current_admin_qpairs": 0, 00:07:58.705 "current_io_qpairs": 0, 00:07:58.705 "pending_bdev_io": 0, 00:07:58.705 "completed_nvme_io": 0, 00:07:58.705 "transports": [] 00:07:58.705 } 00:07:58.705 ] 00:07:58.705 }' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.705 [2024-05-15 16:58:46.151528] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:58.705 "tick_rate": 2300000000, 00:07:58.705 "poll_groups": [ 00:07:58.705 { 00:07:58.705 "name": "nvmf_tgt_poll_group_000", 00:07:58.705 "admin_qpairs": 0, 00:07:58.705 "io_qpairs": 0, 00:07:58.705 "current_admin_qpairs": 0, 00:07:58.705 "current_io_qpairs": 0, 00:07:58.705 "pending_bdev_io": 0, 00:07:58.705 "completed_nvme_io": 0, 00:07:58.705 "transports": [ 00:07:58.705 { 00:07:58.705 "trtype": "TCP" 00:07:58.705 } 00:07:58.705 ] 00:07:58.705 }, 00:07:58.705 { 00:07:58.705 "name": "nvmf_tgt_poll_group_001", 00:07:58.705 "admin_qpairs": 0, 00:07:58.705 "io_qpairs": 0, 00:07:58.705 "current_admin_qpairs": 0, 00:07:58.705 "current_io_qpairs": 0, 00:07:58.705 "pending_bdev_io": 0, 00:07:58.705 "completed_nvme_io": 0, 00:07:58.705 "transports": [ 00:07:58.705 { 00:07:58.705 "trtype": "TCP" 00:07:58.705 } 00:07:58.705 ] 00:07:58.705 }, 00:07:58.705 { 00:07:58.705 "name": "nvmf_tgt_poll_group_002", 00:07:58.705 "admin_qpairs": 0, 00:07:58.705 "io_qpairs": 0, 00:07:58.705 "current_admin_qpairs": 0, 00:07:58.705 "current_io_qpairs": 0, 00:07:58.705 "pending_bdev_io": 0, 00:07:58.705 "completed_nvme_io": 0, 00:07:58.705 "transports": [ 00:07:58.705 { 00:07:58.705 "trtype": "TCP" 00:07:58.705 } 00:07:58.705 ] 00:07:58.705 }, 00:07:58.705 { 00:07:58.705 "name": "nvmf_tgt_poll_group_003", 00:07:58.705 "admin_qpairs": 0, 00:07:58.705 "io_qpairs": 0, 00:07:58.705 "current_admin_qpairs": 0, 00:07:58.705 "current_io_qpairs": 0, 00:07:58.705 "pending_bdev_io": 0, 00:07:58.705 "completed_nvme_io": 0, 00:07:58.705 "transports": [ 00:07:58.705 { 00:07:58.705 "trtype": "TCP" 00:07:58.705 } 00:07:58.705 ] 00:07:58.705 } 00:07:58.705 ] 
00:07:58.705 }' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.705 Malloc1 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.705 [2024-05-15 16:58:46.315362] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:58.705 [2024-05-15 16:58:46.315613] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.705 16:58:46 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:58.705 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:07:58.705 [2024-05-15 16:58:46.348199] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:07:58.963 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:58.963 could not add new controller: failed to write to nvme-fabrics device 00:07:58.963 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:58.963 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:58.963 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:58.963 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:58.963 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:58.963 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.963 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.963 16:58:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.963 16:58:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:00.333 16:58:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
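The sequence above is the host-authorization round trip for nqn.2016-06.io.spdk:cnode1: build a Malloc-backed subsystem, disable allow_any_host, confirm the connect is rejected with "does not allow host", then register the host NQN and connect for real. A condensed sketch of the same calls; rpc.py here is only a stand-in for the test's rpc_cmd wrapper, and HOSTID/HOSTNQN are the gen-hostnqn values printed earlier in the trace:

RPC="$SPDK_DIR/scripts/rpc.py"                 # stand-in for rpc_cmd in the trace
HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # require an explicit host list
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# with no host registered yet, this connect is expected to fail
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 \
    -a 10.0.0.2 -s 4420 && echo "unexpected: connect should have been rejected"
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $HOSTNQN
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 \
    -a 10.0.0.2 -s 4420

The disconnect, nvmf_subsystem_remove_host, and second rejected connect that follow in the trace exercise the reverse path before allow_any_host is re-enabled.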
00:08:00.333 16:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:00.333 16:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:00.333 16:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:00.333 16:58:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:02.226 [2024-05-15 16:58:49.712947] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:02.226 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:02.226 could not add new controller: failed to write to nvme-fabrics device 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.226 16:58:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:03.593 16:58:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:03.593 16:58:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:03.593 16:58:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:03.593 16:58:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:03.593 16:58:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:05.484 16:58:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:05.484 16:58:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:05.484 16:58:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:05.484 16:58:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:05.484 16:58:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:05.484 16:58:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:05.484 16:58:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:05.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.484 16:58:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:05.484 16:58:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:05.484 16:58:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:05.484 16:58:52 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.484 [2024-05-15 16:58:53.047904] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:05.484 16:58:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:06.852 16:58:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:06.852 16:58:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:06.852 16:58:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:06.852 16:58:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:06.852 16:58:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:08.746 
16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:08.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.746 [2024-05-15 16:58:56.334786] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.746 16:58:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:10.115 16:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:10.115 16:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:10.115 16:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:10.115 16:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:10.115 16:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:12.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.008 16:58:59 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.008 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.265 [2024-05-15 16:58:59.667809] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.265 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.265 16:58:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:12.265 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.265 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.265 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.265 16:58:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:12.265 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.265 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:12.265 16:58:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.265 16:58:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:13.647 16:59:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:13.647 16:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:13.647 16:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:13.647 16:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:13.647 16:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:15.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.556 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.557 16:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:15.557 16:59:02 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:15.557 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.557 16:59:02 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.557 16:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.557 16:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.557 16:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.557 16:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.557 [2024-05-15 16:59:03.011155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.557 16:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.557 16:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:15.557 16:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.557 16:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.557 16:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.557 16:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:15.557 16:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.557 16:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.557 16:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.557 16:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:16.923 16:59:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:08:16.923 16:59:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:16.923 16:59:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:16.923 16:59:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:16.923 16:59:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:18.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.812 
[2024-05-15 16:59:06.343291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.812 16:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:20.182 16:59:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:20.182 16:59:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:20.182 16:59:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:20.182 16:59:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:20.182 16:59:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:22.075 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 [2024-05-15 16:59:09.607618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 [2024-05-15 16:59:09.655740] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 [2024-05-15 16:59:09.703885] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:22.075 
16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.075 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.332 [2024-05-15 16:59:09.752035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.332 16:59:09 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.332 [2024-05-15 16:59:09.800218] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
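
The two loops above drive the same target-side RPC sequence five times each: the first loop also connects from the host with nvme-cli and waits for the SPDKISFASTANDAWESOME serial to show up in lsblk before disconnecting, while the second loop creates and deletes the subsystem with no host I/O at all. A minimal standalone sketch of that per-iteration sequence, using the same scripts/rpc.py calls and arguments seen in this trace (it assumes the target is already listening on 10.0.0.2:4420 and that bdev Malloc1 exists, as in this run; the --hostnqn/--hostid flags passed to nvme connect here are omitted for brevity, and the wait loop is a simplified stand-in for the harness's waitforserial helper):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 5); do
        # target side: create the subsystem with the test serial and expose it over NVMe/TCP
        $RPC nvmf_create_subsystem $NQN -s SPDKISFASTANDAWESOME
        $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
        $RPC nvmf_subsystem_add_ns $NQN Malloc1 -n 5
        $RPC nvmf_subsystem_allow_any_host $NQN
        # host side (first loop only): connect, wait for the serial to appear, then disconnect
        nvme connect -t tcp -n $NQN -a 10.0.0.2 -s 4420
        for try in $(seq 1 15); do
            lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME && break
            sleep 2
        done
        nvme disconnect -n $NQN
        # tear down so the next iteration starts clean
        $RPC nvmf_subsystem_remove_ns $NQN 5
        $RPC nvmf_delete_subsystem $NQN
    done
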
00:08:22.332 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:22.332 "tick_rate": 2300000000, 00:08:22.333 "poll_groups": [ 00:08:22.333 { 00:08:22.333 "name": "nvmf_tgt_poll_group_000", 00:08:22.333 "admin_qpairs": 2, 00:08:22.333 "io_qpairs": 168, 00:08:22.333 "current_admin_qpairs": 0, 00:08:22.333 "current_io_qpairs": 0, 00:08:22.333 "pending_bdev_io": 0, 00:08:22.333 "completed_nvme_io": 267, 00:08:22.333 "transports": [ 00:08:22.333 { 00:08:22.333 "trtype": "TCP" 00:08:22.333 } 00:08:22.333 ] 00:08:22.333 }, 00:08:22.333 { 00:08:22.333 "name": "nvmf_tgt_poll_group_001", 00:08:22.333 "admin_qpairs": 2, 00:08:22.333 "io_qpairs": 168, 00:08:22.333 "current_admin_qpairs": 0, 00:08:22.333 "current_io_qpairs": 0, 00:08:22.333 "pending_bdev_io": 0, 00:08:22.333 "completed_nvme_io": 318, 00:08:22.333 "transports": [ 00:08:22.333 { 00:08:22.333 "trtype": "TCP" 00:08:22.333 } 00:08:22.333 ] 00:08:22.333 }, 00:08:22.333 { 00:08:22.333 "name": "nvmf_tgt_poll_group_002", 00:08:22.333 "admin_qpairs": 1, 00:08:22.333 "io_qpairs": 168, 00:08:22.333 "current_admin_qpairs": 0, 00:08:22.333 "current_io_qpairs": 0, 00:08:22.333 "pending_bdev_io": 0, 00:08:22.333 "completed_nvme_io": 219, 00:08:22.333 "transports": [ 00:08:22.333 { 00:08:22.333 "trtype": "TCP" 00:08:22.333 } 00:08:22.333 ] 00:08:22.333 }, 00:08:22.333 { 00:08:22.333 "name": "nvmf_tgt_poll_group_003", 00:08:22.333 "admin_qpairs": 2, 00:08:22.333 "io_qpairs": 168, 00:08:22.333 "current_admin_qpairs": 0, 00:08:22.333 "current_io_qpairs": 0, 00:08:22.333 "pending_bdev_io": 0, 00:08:22.333 "completed_nvme_io": 218, 00:08:22.333 "transports": [ 00:08:22.333 { 00:08:22.333 "trtype": "TCP" 00:08:22.333 } 00:08:22.333 ] 00:08:22.333 } 00:08:22.333 ] 00:08:22.333 }' 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:22.333 rmmod nvme_tcp 00:08:22.333 rmmod nvme_fabrics 00:08:22.333 rmmod nvme_keyring 00:08:22.333 
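
The qpair totals checked just above ((( 7 > 0 )) admin qpairs and (( 672 > 0 )) I/O qpairs) come from the jsum helper in target/rpc.sh, which sums one numeric field across every poll group of the nvmf_get_stats JSON. The exact plumbing inside jsum is not shown in the trace; the sketch below assumes it operates on the JSON captured into $stats above and uses the same jq filters and awk reducer that do appear in the log:

    # sum a numeric per-poll-group field from the captured nvmf_get_stats output
    jsum() {
        local filter=$1
        echo "$stats" | jq "$filter" | awk '{s+=$1} END {print s}'
    }
    jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7 for the stats captured above
    jsum '.poll_groups[].io_qpairs'      # 4 poll groups x 168 = 672
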
16:59:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2938164 ']' 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2938164 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 2938164 ']' 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 2938164 00:08:22.333 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:08:22.590 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:22.590 16:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2938164 00:08:22.590 16:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:22.590 16:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:22.590 16:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2938164' 00:08:22.590 killing process with pid 2938164 00:08:22.590 16:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 2938164 00:08:22.590 [2024-05-15 16:59:10.033829] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:22.590 16:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 2938164 00:08:22.848 16:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:22.848 16:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:22.848 16:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:22.848 16:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:22.848 16:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:22.848 16:59:10 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.848 16:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.848 16:59:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.747 16:59:12 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:24.747 00:08:24.747 real 0m32.674s 00:08:24.747 user 1m40.892s 00:08:24.747 sys 0m5.779s 00:08:24.747 16:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.747 16:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.747 ************************************ 00:08:24.747 END TEST nvmf_rpc 00:08:24.747 ************************************ 00:08:24.747 16:59:12 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:24.747 16:59:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:24.747 16:59:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.747 16:59:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:24.747 ************************************ 00:08:24.747 START TEST nvmf_invalid 00:08:24.747 ************************************ 00:08:24.747 16:59:12 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:25.004 * Looking for test storage... 00:08:25.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.004 16:59:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.005 16:59:12 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.260 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:30.261 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:30.261 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:30.261 Found net devices under 0000:86:00.0: cvl_0_0 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:30.261 Found net devices under 0000:86:00.1: cvl_0_1 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:30.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:30.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:08:30.261 00:08:30.261 --- 10.0.0.2 ping statistics --- 00:08:30.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.261 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:30.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:08:30.261 00:08:30.261 --- 10.0.0.1 ping statistics --- 00:08:30.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.261 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2945831 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2945831 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 2945831 ']' 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:30.261 16:59:17 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:30.261 [2024-05-15 16:59:17.908766] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
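
Before nvmf_invalid starts its negative tests, nvmf_tcp_init has moved one E810 port into a network namespace, verified connectivity in both directions, and nvmfappstart has launched nvmf_tgt inside that namespace. A condensed view of the setup traced above (the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addressing are specific to this machine):

    # target port lives in its own namespace; the initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic on port 4420 and sanity-check both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target itself then runs inside the namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
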
00:08:30.261 [2024-05-15 16:59:17.908809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.518 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.518 [2024-05-15 16:59:17.966500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.518 [2024-05-15 16:59:18.038978] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.518 [2024-05-15 16:59:18.039019] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.519 [2024-05-15 16:59:18.039026] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.519 [2024-05-15 16:59:18.039031] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.519 [2024-05-15 16:59:18.039036] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.519 [2024-05-15 16:59:18.039100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.519 [2024-05-15 16:59:18.039189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.519 [2024-05-15 16:59:18.039238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.519 [2024-05-15 16:59:18.039240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.081 16:59:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:31.081 16:59:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:08:31.081 16:59:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.081 16:59:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.081 16:59:18 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:31.337 16:59:18 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.337 16:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:31.337 16:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode32570 00:08:31.337 [2024-05-15 16:59:18.908640] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:31.337 16:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:08:31.337 { 00:08:31.337 "nqn": "nqn.2016-06.io.spdk:cnode32570", 00:08:31.337 "tgt_name": "foobar", 00:08:31.337 "method": "nvmf_create_subsystem", 00:08:31.337 "req_id": 1 00:08:31.337 } 00:08:31.337 Got JSON-RPC error response 00:08:31.337 response: 00:08:31.337 { 00:08:31.337 "code": -32603, 00:08:31.337 "message": "Unable to find target foobar" 00:08:31.337 }' 00:08:31.337 16:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:31.337 { 00:08:31.337 "nqn": "nqn.2016-06.io.spdk:cnode32570", 00:08:31.337 "tgt_name": "foobar", 00:08:31.337 "method": "nvmf_create_subsystem", 00:08:31.337 "req_id": 1 00:08:31.337 } 00:08:31.337 Got JSON-RPC error response 00:08:31.337 response: 00:08:31.337 { 00:08:31.337 "code": -32603, 00:08:31.337 "message": "Unable to find target foobar" 00:08:31.337 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:31.337 16:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:31.337 16:59:18 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27572 00:08:31.595 [2024-05-15 16:59:19.101340] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27572: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:31.595 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:31.595 { 00:08:31.595 "nqn": "nqn.2016-06.io.spdk:cnode27572", 00:08:31.595 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:31.595 "method": "nvmf_create_subsystem", 00:08:31.595 "req_id": 1 00:08:31.595 } 00:08:31.595 Got JSON-RPC error response 00:08:31.595 response: 00:08:31.595 { 00:08:31.595 "code": -32602, 00:08:31.595 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:31.595 }' 00:08:31.595 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:08:31.595 { 00:08:31.595 "nqn": "nqn.2016-06.io.spdk:cnode27572", 00:08:31.595 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:31.595 "method": "nvmf_create_subsystem", 00:08:31.595 "req_id": 1 00:08:31.595 } 00:08:31.595 Got JSON-RPC error response 00:08:31.595 response: 00:08:31.595 { 00:08:31.595 "code": -32602, 00:08:31.595 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:31.595 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:31.595 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:31.595 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3314 00:08:31.852 [2024-05-15 16:59:19.293942] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3314: invalid model number 'SPDK_Controller' 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:31.852 { 00:08:31.852 "nqn": "nqn.2016-06.io.spdk:cnode3314", 00:08:31.852 "model_number": "SPDK_Controller\u001f", 00:08:31.852 "method": "nvmf_create_subsystem", 00:08:31.852 "req_id": 1 00:08:31.852 } 00:08:31.852 Got JSON-RPC error response 00:08:31.852 response: 00:08:31.852 { 00:08:31.852 "code": -32602, 00:08:31.852 "message": "Invalid MN SPDK_Controller\u001f" 00:08:31.852 }' 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:31.852 { 00:08:31.852 "nqn": "nqn.2016-06.io.spdk:cnode3314", 00:08:31.852 "model_number": "SPDK_Controller\u001f", 00:08:31.852 "method": "nvmf_create_subsystem", 00:08:31.852 "req_id": 1 00:08:31.852 } 00:08:31.852 Got JSON-RPC error response 00:08:31.852 response: 00:08:31.852 { 00:08:31.852 "code": -32602, 00:08:31.852 "message": "Invalid MN SPDK_Controller\u001f" 00:08:31.852 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.852 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 111 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ | == \- ]] 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '|NHkmjt?Zzh:PR#o1n;d"' 00:08:31.853 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '|NHkmjt?Zzh:PR#o1n;d"' nqn.2016-06.io.spdk:cnode23218 00:08:32.110 [2024-05-15 16:59:19.623055] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23218: invalid serial number '|NHkmjt?Zzh:PR#o1n;d"' 00:08:32.110 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:32.110 { 00:08:32.110 "nqn": "nqn.2016-06.io.spdk:cnode23218", 00:08:32.110 "serial_number": "|NHkmjt?Zzh:PR#o1n;d\"", 00:08:32.110 "method": "nvmf_create_subsystem", 00:08:32.110 "req_id": 1 00:08:32.110 } 00:08:32.110 Got JSON-RPC error response 00:08:32.110 response: 00:08:32.110 { 00:08:32.110 "code": -32602, 
00:08:32.110 "message": "Invalid SN |NHkmjt?Zzh:PR#o1n;d\"" 00:08:32.110 }' 00:08:32.110 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:32.110 { 00:08:32.110 "nqn": "nqn.2016-06.io.spdk:cnode23218", 00:08:32.110 "serial_number": "|NHkmjt?Zzh:PR#o1n;d\"", 00:08:32.110 "method": "nvmf_create_subsystem", 00:08:32.110 "req_id": 1 00:08:32.110 } 00:08:32.110 Got JSON-RPC error response 00:08:32.110 response: 00:08:32.110 { 00:08:32.110 "code": -32602, 00:08:32.110 "message": "Invalid SN |NHkmjt?Zzh:PR#o1n;d\"" 00:08:32.110 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:32.110 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:32.110 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:32.110 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:32.110 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:32.110 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:32.110 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:32.111 16:59:19 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:08:32.111 16:59:19 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:32.111 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:08:32.369 16:59:19 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.369 16:59:19 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:32.369 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ D == \- ]] 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'D#KyW bRm\f8PJe&xs[dmiMRXl_h>j>oP5Dcxh1:' 00:08:32.370 16:59:19 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'D#KyW bRm\f8PJe&xs[dmiMRXl_h>j>oP5Dcxh1:' nqn.2016-06.io.spdk:cnode20297 00:08:32.626 [2024-05-15 16:59:20.064564] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20297: invalid model number 'D#KyW bRm\f8PJe&xs[dmiMRXl_h>j>oP5Dcxh1:' 00:08:32.626 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:08:32.626 { 00:08:32.626 "nqn": "nqn.2016-06.io.spdk:cnode20297", 00:08:32.626 "model_number": "D#KyW bRm\\f8PJe&xs[dmiMR\u007fXl_h>j>oP5Dcxh1:", 00:08:32.626 "method": "nvmf_create_subsystem", 00:08:32.626 "req_id": 1 00:08:32.626 } 00:08:32.626 Got JSON-RPC error response 00:08:32.626 response: 00:08:32.626 { 00:08:32.626 "code": -32602, 00:08:32.626 "message": "Invalid MN D#KyW bRm\\f8PJe&xs[dmiMR\u007fXl_h>j>oP5Dcxh1:" 00:08:32.626 }' 00:08:32.626 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:08:32.626 { 00:08:32.626 "nqn": "nqn.2016-06.io.spdk:cnode20297", 00:08:32.626 "model_number": "D#KyW bRm\\f8PJe&xs[dmiMR\u007fXl_h>j>oP5Dcxh1:", 00:08:32.626 "method": "nvmf_create_subsystem", 00:08:32.626 "req_id": 1 00:08:32.626 } 00:08:32.626 Got JSON-RPC error response 00:08:32.626 response: 00:08:32.626 { 00:08:32.626 "code": -32602, 00:08:32.626 "message": "Invalid MN D#KyW bRm\\f8PJe&xs[dmiMR\u007fXl_h>j>oP5Dcxh1:" 00:08:32.626 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:32.627 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:32.627 [2024-05-15 16:59:20.257297] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.883 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:32.883 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:32.883 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:32.883 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:32.883 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:32.883 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:33.140 [2024-05-15 16:59:20.650538] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:33.140 [2024-05-15 16:59:20.650610] nvmf_rpc.c: 794:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:33.140 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:08:33.140 { 00:08:33.140 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:33.140 "listen_address": { 00:08:33.140 "trtype": "tcp", 00:08:33.140 "traddr": "", 00:08:33.140 "trsvcid": "4421" 00:08:33.140 }, 00:08:33.140 "method": "nvmf_subsystem_remove_listener", 00:08:33.140 "req_id": 1 00:08:33.140 } 00:08:33.140 Got JSON-RPC error response 00:08:33.140 response: 00:08:33.140 { 00:08:33.140 "code": -32602, 00:08:33.140 "message": "Invalid parameters" 00:08:33.140 }' 00:08:33.140 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:08:33.140 { 00:08:33.140 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:33.140 "listen_address": { 00:08:33.140 "trtype": "tcp", 00:08:33.140 "traddr": "", 00:08:33.140 "trsvcid": "4421" 00:08:33.140 }, 00:08:33.140 "method": "nvmf_subsystem_remove_listener", 00:08:33.140 "req_id": 1 00:08:33.140 } 00:08:33.140 Got JSON-RPC error response 00:08:33.140 response: 00:08:33.140 { 00:08:33.140 "code": -32602, 00:08:33.140 "message": "Invalid parameters" 00:08:33.140 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:33.140 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11196 -i 0 00:08:33.397 [2024-05-15 16:59:20.843183] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11196: invalid cntlid range [0-65519] 00:08:33.397 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:08:33.397 { 00:08:33.397 "nqn": "nqn.2016-06.io.spdk:cnode11196", 00:08:33.397 "min_cntlid": 0, 00:08:33.397 "method": "nvmf_create_subsystem", 00:08:33.397 "req_id": 1 00:08:33.397 } 00:08:33.397 Got JSON-RPC error response 00:08:33.397 response: 00:08:33.397 { 00:08:33.397 "code": -32602, 00:08:33.397 "message": "Invalid cntlid range [0-65519]" 00:08:33.397 }' 00:08:33.397 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:08:33.397 { 00:08:33.397 "nqn": "nqn.2016-06.io.spdk:cnode11196", 00:08:33.397 "min_cntlid": 0, 00:08:33.397 "method": "nvmf_create_subsystem", 00:08:33.397 "req_id": 1 
00:08:33.397 } 00:08:33.398 Got JSON-RPC error response 00:08:33.398 response: 00:08:33.398 { 00:08:33.398 "code": -32602, 00:08:33.398 "message": "Invalid cntlid range [0-65519]" 00:08:33.398 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:33.398 16:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19691 -i 65520 00:08:33.398 [2024-05-15 16:59:21.035856] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19691: invalid cntlid range [65520-65519] 00:08:33.654 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:33.654 { 00:08:33.654 "nqn": "nqn.2016-06.io.spdk:cnode19691", 00:08:33.654 "min_cntlid": 65520, 00:08:33.654 "method": "nvmf_create_subsystem", 00:08:33.654 "req_id": 1 00:08:33.654 } 00:08:33.654 Got JSON-RPC error response 00:08:33.654 response: 00:08:33.654 { 00:08:33.654 "code": -32602, 00:08:33.654 "message": "Invalid cntlid range [65520-65519]" 00:08:33.654 }' 00:08:33.654 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:33.654 { 00:08:33.654 "nqn": "nqn.2016-06.io.spdk:cnode19691", 00:08:33.654 "min_cntlid": 65520, 00:08:33.654 "method": "nvmf_create_subsystem", 00:08:33.654 "req_id": 1 00:08:33.654 } 00:08:33.654 Got JSON-RPC error response 00:08:33.654 response: 00:08:33.654 { 00:08:33.654 "code": -32602, 00:08:33.654 "message": "Invalid cntlid range [65520-65519]" 00:08:33.654 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:33.654 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26096 -I 0 00:08:33.654 [2024-05-15 16:59:21.212475] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26096: invalid cntlid range [1-0] 00:08:33.654 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:08:33.654 { 00:08:33.654 "nqn": "nqn.2016-06.io.spdk:cnode26096", 00:08:33.654 "max_cntlid": 0, 00:08:33.654 "method": "nvmf_create_subsystem", 00:08:33.654 "req_id": 1 00:08:33.654 } 00:08:33.654 Got JSON-RPC error response 00:08:33.654 response: 00:08:33.654 { 00:08:33.654 "code": -32602, 00:08:33.654 "message": "Invalid cntlid range [1-0]" 00:08:33.654 }' 00:08:33.654 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:33.654 { 00:08:33.654 "nqn": "nqn.2016-06.io.spdk:cnode26096", 00:08:33.654 "max_cntlid": 0, 00:08:33.654 "method": "nvmf_create_subsystem", 00:08:33.654 "req_id": 1 00:08:33.654 } 00:08:33.654 Got JSON-RPC error response 00:08:33.654 response: 00:08:33.654 { 00:08:33.654 "code": -32602, 00:08:33.654 "message": "Invalid cntlid range [1-0]" 00:08:33.654 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:33.654 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27788 -I 65520 00:08:33.912 [2024-05-15 16:59:21.385055] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27788: invalid cntlid range [1-65520] 00:08:33.912 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:33.912 { 00:08:33.912 "nqn": "nqn.2016-06.io.spdk:cnode27788", 00:08:33.912 "max_cntlid": 65520, 00:08:33.912 "method": "nvmf_create_subsystem", 00:08:33.912 "req_id": 1 00:08:33.912 } 00:08:33.912 
Got JSON-RPC error response 00:08:33.912 response: 00:08:33.912 { 00:08:33.912 "code": -32602, 00:08:33.912 "message": "Invalid cntlid range [1-65520]" 00:08:33.912 }' 00:08:33.912 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:08:33.912 { 00:08:33.912 "nqn": "nqn.2016-06.io.spdk:cnode27788", 00:08:33.912 "max_cntlid": 65520, 00:08:33.912 "method": "nvmf_create_subsystem", 00:08:33.912 "req_id": 1 00:08:33.912 } 00:08:33.912 Got JSON-RPC error response 00:08:33.912 response: 00:08:33.912 { 00:08:33.912 "code": -32602, 00:08:33.912 "message": "Invalid cntlid range [1-65520]" 00:08:33.912 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:33.912 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28721 -i 6 -I 5 00:08:33.912 [2024-05-15 16:59:21.557659] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28721: invalid cntlid range [6-5] 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:34.170 { 00:08:34.170 "nqn": "nqn.2016-06.io.spdk:cnode28721", 00:08:34.170 "min_cntlid": 6, 00:08:34.170 "max_cntlid": 5, 00:08:34.170 "method": "nvmf_create_subsystem", 00:08:34.170 "req_id": 1 00:08:34.170 } 00:08:34.170 Got JSON-RPC error response 00:08:34.170 response: 00:08:34.170 { 00:08:34.170 "code": -32602, 00:08:34.170 "message": "Invalid cntlid range [6-5]" 00:08:34.170 }' 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:34.170 { 00:08:34.170 "nqn": "nqn.2016-06.io.spdk:cnode28721", 00:08:34.170 "min_cntlid": 6, 00:08:34.170 "max_cntlid": 5, 00:08:34.170 "method": "nvmf_create_subsystem", 00:08:34.170 "req_id": 1 00:08:34.170 } 00:08:34.170 Got JSON-RPC error response 00:08:34.170 response: 00:08:34.170 { 00:08:34.170 "code": -32602, 00:08:34.170 "message": "Invalid cntlid range [6-5]" 00:08:34.170 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:34.170 { 00:08:34.170 "name": "foobar", 00:08:34.170 "method": "nvmf_delete_target", 00:08:34.170 "req_id": 1 00:08:34.170 } 00:08:34.170 Got JSON-RPC error response 00:08:34.170 response: 00:08:34.170 { 00:08:34.170 "code": -32602, 00:08:34.170 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:34.170 }' 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:34.170 { 00:08:34.170 "name": "foobar", 00:08:34.170 "method": "nvmf_delete_target", 00:08:34.170 "req_id": 1 00:08:34.170 } 00:08:34.170 Got JSON-RPC error response 00:08:34.170 response: 00:08:34.170 { 00:08:34.170 "code": -32602, 00:08:34.170 "message": "The specified target doesn't exist, cannot delete it." 
00:08:34.170 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:34.170 rmmod nvme_tcp 00:08:34.170 rmmod nvme_fabrics 00:08:34.170 rmmod nvme_keyring 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2945831 ']' 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2945831 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 2945831 ']' 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 2945831 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2945831 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2945831' 00:08:34.170 killing process with pid 2945831 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 2945831 00:08:34.170 [2024-05-15 16:59:21.810157] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:34.170 16:59:21 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 2945831 00:08:34.429 16:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:34.429 16:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:34.429 16:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:34.429 16:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.429 16:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:34.429 16:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.429 16:59:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.429 16:59:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.036 16:59:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
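(The cntlid failures traced above all come from one call pattern: invoke rpc.py nvmf_create_subsystem with a -i/-I value outside the allowed 1-65519 window and match the "Invalid cntlid range" text in the JSON-RPC error. A minimal standalone sketch of that pattern, assuming an already-running target on the default /var/tmp/spdk.sock socket; the cnode NQN below is illustrative, not a value from this run:

    # min_cntlid of 0 is out of range; expect "Invalid cntlid range [0-65519]" back
    out=$(./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem \
            nqn.2016-06.io.spdk:cnode1 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]] && echo "rejected as expected"
)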
00:08:37.036 00:08:37.036 real 0m11.684s 00:08:37.036 user 0m19.535s 00:08:37.036 sys 0m5.002s 00:08:37.036 16:59:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:37.036 16:59:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:37.036 ************************************ 00:08:37.036 END TEST nvmf_invalid 00:08:37.036 ************************************ 00:08:37.036 16:59:24 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:37.036 16:59:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:37.036 16:59:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:37.036 16:59:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:37.036 ************************************ 00:08:37.036 START TEST nvmf_abort 00:08:37.036 ************************************ 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:37.036 * Looking for test storage... 00:08:37.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
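(nvmf/common.sh above generates one host NQN/ID pair with nvme gen-hostnqn and keeps it in NVME_HOST for every initiator-side connect. A hedged sketch of how such a pair is typically handed to nvme-cli when attaching to the target; the connect itself is not shown in this excerpt, and the address, port, and subsystem NQN below simply mirror the defaults set in common.sh (10.0.0.2, 4420, nqn.2016-06.io.spdk:testnqn):

    HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    HOSTID=${HOSTNQN##*uuid:}          # the UUID part doubles as the host ID
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$HOSTNQN" --hostid="$HOSTID"
)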
00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:37.036 16:59:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.285 16:59:29 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:42.285 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:42.285 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.285 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:42.286 Found net devices under 0000:86:00.0: cvl_0_0 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:42.286 Found net devices under 0000:86:00.1: cvl_0_1 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:42.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:42.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:08:42.286 00:08:42.286 --- 10.0.0.2 ping statistics --- 00:08:42.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.286 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:42.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:08:42.286 00:08:42.286 --- 10.0.0.1 ping statistics --- 00:08:42.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.286 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2950148 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2950148 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 2950148 ']' 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:42.286 16:59:29 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:42.286 [2024-05-15 16:59:29.602803] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:08:42.286 [2024-05-15 16:59:29.602843] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.286 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.286 [2024-05-15 16:59:29.659919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:42.286 [2024-05-15 16:59:29.737403] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.286 [2024-05-15 16:59:29.737444] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.287 [2024-05-15 16:59:29.737451] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.287 [2024-05-15 16:59:29.737457] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:42.287 [2024-05-15 16:59:29.737462] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.287 [2024-05-15 16:59:29.737568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.287 [2024-05-15 16:59:29.737675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.287 [2024-05-15 16:59:29.737676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.849 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:42.849 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:08:42.849 16:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:42.849 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:42.850 [2024-05-15 16:59:30.453930] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:42.850 Malloc0 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:42.850 Delay0 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:42.850 16:59:30 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.850 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:43.106 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.106 16:59:30 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:43.106 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.106 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:43.106 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.106 16:59:30 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:43.106 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.106 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:43.106 [2024-05-15 16:59:30.523662] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:43.106 [2024-05-15 16:59:30.523920] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.106 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.106 16:59:30 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.106 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.106 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:43.106 16:59:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.106 16:59:30 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:43.106 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.106 [2024-05-15 16:59:30.634916] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:45.624 Initializing NVMe Controllers 00:08:45.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:45.624 controller IO queue size 128 less than required 00:08:45.624 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:45.624 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:45.624 Initialization complete. Launching workers. 
00:08:45.624 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 125, failed: 43666 00:08:45.624 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43729, failed to submit 62 00:08:45.624 success 43670, unsuccess 59, failed 0 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.624 rmmod nvme_tcp 00:08:45.624 rmmod nvme_fabrics 00:08:45.624 rmmod nvme_keyring 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2950148 ']' 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2950148 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 2950148 ']' 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 2950148 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2950148 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2950148' 00:08:45.624 killing process with pid 2950148 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 2950148 00:08:45.624 [2024-05-15 16:59:32.823284] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:45.624 16:59:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 2950148 00:08:45.624 16:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:45.624 16:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:45.624 16:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:45.624 16:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.624 
16:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.624 16:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.624 16:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.624 16:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.520 16:59:35 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:47.520 00:08:47.520 real 0m10.964s 00:08:47.520 user 0m13.049s 00:08:47.520 sys 0m4.925s 00:08:47.520 16:59:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:47.520 16:59:35 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:47.520 ************************************ 00:08:47.520 END TEST nvmf_abort 00:08:47.520 ************************************ 00:08:47.520 16:59:35 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:47.520 16:59:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:47.520 16:59:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:47.520 16:59:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:47.777 ************************************ 00:08:47.777 START TEST nvmf_ns_hotplug_stress 00:08:47.777 ************************************ 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:47.777 * Looking for test storage... 00:08:47.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.777 
16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.777 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.778 
16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:47.778 16:59:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.037 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.037 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:53.037 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:53.037 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:53.037 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:53.037 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:53.037 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:53.037 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:53.037 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:53.037 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:53.038 16:59:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:53.038 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:53.038 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.038 
16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:53.038 Found net devices under 0000:86:00.0: cvl_0_0 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:53.038 Found net devices under 0000:86:00.1: cvl_0_1 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:53.038 
16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.038 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.296 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.296 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.296 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:53.296 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.296 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.296 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.296 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:53.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:08:53.296 00:08:53.296 --- 10.0.0.2 ping statistics --- 00:08:53.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.296 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:08:53.296 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:53.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:08:53.296 00:08:53.296 --- 10.0.0.1 ping statistics --- 00:08:53.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.296 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:08:53.296 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.296 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2954154 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2954154 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 2954154 ']' 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:53.297 16:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.297 [2024-05-15 16:59:40.928677] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:08:53.297 [2024-05-15 16:59:40.928721] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.297 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.554 [2024-05-15 16:59:40.981141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:53.554 [2024-05-15 16:59:41.057225] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:53.554 [2024-05-15 16:59:41.057261] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.554 [2024-05-15 16:59:41.057269] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.554 [2024-05-15 16:59:41.057274] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.554 [2024-05-15 16:59:41.057280] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.554 [2024-05-15 16:59:41.057378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.554 [2024-05-15 16:59:41.057439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.554 [2024-05-15 16:59:41.057441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.117 16:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:54.117 16:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:08:54.117 16:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.117 16:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.117 16:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.374 16:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.374 16:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:54.374 16:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:54.374 [2024-05-15 16:59:41.938444] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.374 16:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:54.630 16:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.888 [2024-05-15 16:59:42.295604] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:54.888 [2024-05-15 16:59:42.295867] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.888 16:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.888 16:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:55.144 Malloc0 00:08:55.144 16:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:55.406 Delay0 00:08:55.406 16:59:42 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.406 16:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:55.663 NULL1 00:08:55.663 16:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:55.920 16:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2954641 00:08:55.920 16:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:55.920 16:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:08:55.920 16:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.920 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.288 Read completed with error (sct=0, sc=11) 00:08:57.288 16:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:57.288 16:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:57.288 16:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:57.288 true 00:08:57.544 16:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:08:57.544 16:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.474 16:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.474 16:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:58.474 16:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:58.731 true 00:08:58.731 16:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:08:58.731 16:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:08:58.731 16:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:58.987 16:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:58.988 16:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:58.988 true 00:08:59.244 16:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:08:59.244 16:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.174 16:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:00.454 16:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:00.454 16:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:00.724 true 00:09:00.724 16:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:00.724 16:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.653 16:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.653 16:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:01.653 16:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:01.910 true 00:09:01.910 16:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:01.910 16:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.167 16:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.167 16:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:02.167 16:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 
00:09:02.425 true 00:09:02.425 16:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:02.425 16:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.792 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.793 16:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:03.793 16:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:03.793 16:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:04.049 true 00:09:04.049 16:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:04.049 16:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.979 16:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.979 16:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:04.979 16:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:05.235 true 00:09:05.235 16:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:05.235 16:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.491 16:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.491 16:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:05.492 16:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:05.748 true 00:09:05.748 16:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:05.748 16:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:09:07.118 16:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:07.118 16:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:07.118 16:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:07.374 true 00:09:07.374 16:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:07.374 16:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.304 16:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.304 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.304 16:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:08.304 16:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:08.561 true 00:09:08.561 16:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:08.561 16:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.816 16:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.817 16:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:08.817 16:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:09.073 true 00:09:09.073 16:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:09.073 16:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.444 16:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.444 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:09:10.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.444 16:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:10.444 16:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:10.702 true 00:09:10.702 16:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:10.702 16:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.634 16:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.634 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:11.634 16:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:11.634 16:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:11.891 true 00:09:11.891 16:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:11.891 16:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.148 16:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.149 16:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:12.149 16:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:12.407 true 00:09:12.407 16:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:12.407 16:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:13.780 17:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:13.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.780 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:13.780 17:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:13.780 17:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:14.036 true 00:09:14.036 17:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:14.036 17:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.963 17:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.963 17:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:14.963 17:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:15.219 true 00:09:15.219 17:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:15.219 17:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.219 17:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.477 17:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:15.477 17:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:15.734 true 00:09:15.734 17:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:15.734 17:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.663 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.663 17:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:16.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:16.919 17:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:16.919 17:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:17.177 true 00:09:17.177 17:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:17.177 17:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.108 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:18.108 17:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.108 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:09:18.108 17:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:18.108 17:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:18.364 true 00:09:18.364 17:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:18.364 17:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.620 17:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.878 17:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:18.878 17:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:18.878 true 00:09:18.878 17:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:18.878 17:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.301 17:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.301 17:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:20.301 17:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:20.558 true 00:09:20.558 17:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:20.558 17:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.487 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:21.487 17:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.487 17:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:21.487 17:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:21.743 true 00:09:21.743 17:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:21.743 17:00:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.743 17:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:21.998 17:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:21.998 17:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:22.255 true 00:09:22.255 17:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:22.255 17:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:23.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.625 17:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:23.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.625 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:23.625 17:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:23.625 17:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:23.882 true 00:09:23.882 17:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:23.882 17:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.812 17:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.812 17:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:24.812 17:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:25.069 true 00:09:25.069 17:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641 00:09:25.069 17:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.069 17:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.325 17:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1027
00:09:25.325 17:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:09:25.583 true
00:09:25.583 17:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641
00:09:25.583 17:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:26.513 Initializing NVMe Controllers
00:09:26.513 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:26.513 Controller IO queue size 128, less than required.
00:09:26.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:26.513 Controller IO queue size 128, less than required.
00:09:26.513 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:26.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:26.513 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:26.513 Initialization complete. Launching workers.
00:09:26.513 ========================================================
00:09:26.513                                                           Latency(us)
00:09:26.513 Device Information                                      :     IOPS    MiB/s   Average       min        max
00:09:26.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1927.73  0.94  47906.76  2938.81  1017164.40
00:09:26.513 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17793.70  8.69   7193.33  2302.42   383981.83
00:09:26.513 ========================================================
00:09:26.513 Total                                                   : 19721.43  9.63  11172.99  2302.42  1017164.40
00:09:26.513 
00:09:26.513 17:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:26.770 17:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:09:26.770 17:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:09:27.028 true
00:09:27.028 17:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2954641
00:09:27.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2954641) - No such process
00:09:27.028 17:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2954641
00:09:27.028 17:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:27.285 17:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:27.285 17:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:27.285 17:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:27.285 17:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:27.285 17:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:27.285 17:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:27.542 null0 00:09:27.542 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:27.542 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:27.542 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:27.799 null1 00:09:27.799 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:27.799 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:27.799 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:27.799 null2 00:09:27.799 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:27.799 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:27.799 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:28.056 null3 00:09:28.056 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:28.056 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:28.056 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:28.312 null4 00:09:28.312 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:28.312 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:28.312 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:28.312 null5 00:09:28.568 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:28.568 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:28.568 17:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:28.568 null6 00:09:28.568 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:28.568 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:28.568 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:28.826 null7 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 
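[Annotation] The eight bdev_null_create calls above come from a simple creation loop in ns_hotplug_stress.sh (the @59/@60 xtrace lines). A minimal sketch of that pattern, not the script's exact text; the $rpc shorthand for the rpc.py path shown in the trace is an assumption added here for readability:

    # Assumed shorthand for the rpc.py path printed in the trace
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    # One 100 MB null bdev with a 4096-byte block size per worker thread
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096
    done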
00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
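[Annotation] Each add_remove worker launched at ns_hotplug_stress.sh@63 repeatedly attaches and detaches one namespace, the same hot-plug pattern the single Delay0/NULL1 loop exercised in the first half of this trace. A minimal sketch of what the @14-@18 xtrace lines correspond to (a reconstruction, not the script itself; $rpc as assumed above):

    # Attach and detach namespace <nsid>, backed by <bdev>, ten times in a row
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }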
00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:28.826 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
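[Annotation] The heavy interleaving of add_ns/remove_ns calls below is produced by running one add_remove job per null bdev in the background and then waiting on all of them, which is why the @66 line later in the trace waits on eight PIDs at once. A minimal sketch of that launcher, under the same assumptions as the snippets above:

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &   # namespace IDs 1..8 map onto null0..null7
        pids+=($!)
    done
    wait "${pids[@]}"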
00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2960745 2960747 2960748 2960750 2960752 2960754 2960756 2960758 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:28.827 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:29.084 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.084 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:29.084 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:29.084 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:29.084 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:29.084 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:29.084 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:29.084 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:29.084 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.084 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.084 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.084 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:29.084 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.084 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:29.343 17:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.606 
17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.606 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:29.864 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:30.122 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.122 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.122 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:30.122 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:30.122 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:09:30.122 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.122 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:30.122 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:30.122 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:30.122 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:30.122 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.380 
17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.380 17:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:30.637 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.637 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:30.637 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:30.637 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:30.638 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:30.895 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.895 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:30.895 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:30.895 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:09:30.895 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:30.895 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:30.895 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:30.895 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:31.153 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.153 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.154 
17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.154 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:31.413 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.413 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:31.413 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:31.413 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:31.414 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:31.414 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:31.414 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:31.414 17:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.414 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:31.671 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:31.671 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:31.671 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.671 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:31.671 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:31.671 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:09:31.671 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:31.671 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
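The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns RPCs traced above are the hotplug loop inside ns_hotplug_stress.sh: each pass re-attaches the null bdevs null0-null7 as namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 and then detaches them again while host connections are active. A minimal sketch of that loop as reconstructed from the trace (ten passes per the i < 10 guard); the real script may issue the RPCs in a different order, which is why the add/remove lines above appear shuffled:

    # Hedged sketch of the hotplug loop seen in the trace, not the script verbatim.
    for ((i = 0; i < 10; i++)); do
        for n in {1..8}; do   # attach null0..null7 as NSIDs 1..8
            scripts/rpc.py nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
        done
        for n in {1..8}; do   # then detach the same namespaces
            scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
        done
    done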
00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:31.928 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:31.929 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.184 17:00:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.184 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.185 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:32.185 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.185 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:32.185 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.185 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.185 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:32.445 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.445 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:32.445 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:32.445 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:32.445 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:32.445 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:32.445 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:32.445 17:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:09:32.710 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:32.711 rmmod nvme_tcp 00:09:32.711 rmmod nvme_fabrics 00:09:32.711 rmmod nvme_keyring 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2954154 ']' 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2954154 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 2954154 ']' 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 2954154 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:32.711 17:00:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2954154 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2954154' 00:09:32.711 killing process with pid 2954154 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 2954154 00:09:32.711 [2024-05-15 17:00:20.301883] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:32.711 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 2954154 00:09:32.968 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:32.968 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:32.968 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:32.968 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:32.968 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:32.968 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.968 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:32.968 17:00:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.497 17:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:35.497 00:09:35.497 real 0m47.391s 00:09:35.497 user 3m9.945s 00:09:35.497 sys 0m14.408s 00:09:35.497 17:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:35.497 17:00:22 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:35.497 ************************************ 00:09:35.497 END TEST nvmf_ns_hotplug_stress 00:09:35.497 ************************************ 00:09:35.497 17:00:22 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:35.497 17:00:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:35.497 17:00:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:35.497 17:00:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:35.497 ************************************ 00:09:35.497 START TEST nvmf_connect_stress 00:09:35.497 ************************************ 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:09:35.497 * Looking for test storage... 
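The trace just above (before the connect_stress banner) is the nvmftestfini teardown for the finished hotplug test: the nvme-tcp / nvme-fabrics modules are unloaded, the nvmf_tgt process (pid 2954154) is killed and reaped, the target network namespace is removed, and the leftover address on cvl_0_1 is flushed. Approximated below as plain commands, a hedged sketch; the real common.sh helpers add retries and error checks, and _remove_spdk_ns is assumed here to reduce to an ip netns delete:

    modprobe -v -r nvme-tcp              # also drops nvme_fabrics / nvme_keyring, per the rmmod output above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # $nvmfpid held the target app pid, 2954154 in this run
    ip netns delete cvl_0_0_ns_spdk      # assumption: the effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1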
00:09:35.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.497 17:00:22 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:35.498 17:00:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.825 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:40.826 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:40.826 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:40.826 Found net devices under 0000:86:00.0: cvl_0_0 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:40.826 17:00:27 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:40.826 Found net devices under 0000:86:00.1: cvl_0_1 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.826 17:00:27 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:40.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:40.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:09:40.826 00:09:40.826 --- 10.0.0.2 ping statistics --- 00:09:40.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.826 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:09:40.826 00:09:40.826 --- 10.0.0.1 ping statistics --- 00:09:40.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.826 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:40.826 17:00:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:40.827 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:40.827 17:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:40.827 17:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.827 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2964907 00:09:40.827 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2964907 00:09:40.827 17:00:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:40.827 17:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 2964907 ']' 00:09:40.827 17:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.827 17:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:40.827 17:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.827 17:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:40.827 17:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:40.827 [2024-05-15 17:00:28.206057] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
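At this point connect_stress.sh has rebuilt the TCP test bed through nvmftestinit and started a fresh target: the two E810 ports cvl_0_0/cvl_0_1 are split across a network namespace, addressed, opened on TCP port 4420, verified with ping in both directions, and nvmf_tgt is launched inside the namespace (pid 2964907). A condensed sketch of the commands visible in the trace; the interface and namespace names are simply what this rig uses:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    # the harness then waits on the target's RPC socket (waitforlisten 2964907) before issuing RPCs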
00:09:40.827 [2024-05-15 17:00:28.206103] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.827 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.827 [2024-05-15 17:00:28.263659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:40.827 [2024-05-15 17:00:28.341837] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.827 [2024-05-15 17:00:28.341870] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.827 [2024-05-15 17:00:28.341878] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.827 [2024-05-15 17:00:28.341884] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.827 [2024-05-15 17:00:28.341890] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.827 [2024-05-15 17:00:28.341988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.827 [2024-05-15 17:00:28.342070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.827 [2024-05-15 17:00:28.342072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.397 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:41.397 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:09:41.397 17:00:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:41.397 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.397 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.657 [2024-05-15 17:00:29.062709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.657 [2024-05-15 17:00:29.082715] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:41.657 [2024-05-15 17:00:29.090273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.657 NULL1 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2965153 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.657 EAL: No free 2048 kB hugepages reported on node 1 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.657 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.658 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:41.915 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:41.915 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:41.915 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:41.915 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:41.915 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:42.479 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.479 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:42.479 17:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:42.479 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.479 17:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:42.736 17:00:30 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.736 17:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:42.736 17:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:42.736 17:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.736 17:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:42.992 17:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.992 17:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:42.992 17:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:42.992 17:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.992 17:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:43.249 17:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.249 17:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:43.249 17:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:43.249 17:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.249 17:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:43.506 17:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.506 17:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:43.506 17:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:43.506 17:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.506 17:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:44.067 17:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.067 17:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:44.067 17:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.067 17:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.067 17:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:44.323 17:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.323 17:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:44.323 17:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.323 17:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.323 17:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:44.580 17:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.580 17:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:44.580 17:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.580 17:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.580 17:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:44.836 17:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:09:44.836 17:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:44.836 17:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:44.836 17:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.836 17:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:45.398 17:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.398 17:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:45.398 17:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:45.398 17:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.398 17:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:45.655 17:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.655 17:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:45.655 17:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:45.655 17:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.655 17:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:45.912 17:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.912 17:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:45.912 17:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:45.912 17:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.912 17:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:46.169 17:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.169 17:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:46.169 17:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:46.169 17:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.169 17:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:46.424 17:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.424 17:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:46.424 17:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:46.424 17:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.424 17:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:46.987 17:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.987 17:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:46.987 17:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:46.987 17:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.987 17:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:47.243 17:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.243 17:00:34 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:47.243 17:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:47.243 17:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.243 17:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:47.499 17:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.499 17:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:47.499 17:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:47.499 17:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.499 17:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:47.755 17:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:47.755 17:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:47.755 17:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:47.755 17:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:47.756 17:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:48.012 17:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.012 17:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:48.012 17:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:48.012 17:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.012 17:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:48.574 17:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.574 17:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:48.574 17:00:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:48.574 17:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.574 17:00:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:48.830 17:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.830 17:00:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:48.830 17:00:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:48.830 17:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.830 17:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:49.086 17:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.086 17:00:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:49.086 17:00:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:49.086 17:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.086 17:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:49.343 17:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.343 17:00:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 2965153 00:09:49.343 17:00:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:49.343 17:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.343 17:00:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:49.907 17:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.907 17:00:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:49.907 17:00:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:49.907 17:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.907 17:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:50.163 17:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.163 17:00:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:50.163 17:00:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:50.163 17:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.163 17:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:50.420 17:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.420 17:00:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:50.420 17:00:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:50.420 17:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.420 17:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:50.675 17:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.675 17:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:50.675 17:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:50.675 17:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.675 17:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:50.932 17:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.932 17:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:50.932 17:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:50.932 17:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.932 17:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:51.496 17:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.496 17:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:51.496 17:00:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:51.496 17:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.496 17:00:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:51.752 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.752 17:00:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:51.752 17:00:39 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:09:51.752 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.752 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:51.752 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2965153 00:09:52.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2965153) - No such process 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2965153 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.009 rmmod nvme_tcp 00:09:52.009 rmmod nvme_fabrics 00:09:52.009 rmmod nvme_keyring 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2964907 ']' 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2964907 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 2964907 ']' 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 2964907 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2964907 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2964907' 00:09:52.009 killing process with pid 2964907 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 2964907 00:09:52.009 [2024-05-15 17:00:39.665380] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:09:52.009 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 2964907 00:09:52.266 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:52.266 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:52.266 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:52.266 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:52.266 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:52.266 17:00:39 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.266 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.266 17:00:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.803 17:00:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:54.803 00:09:54.803 real 0m19.276s 00:09:54.803 user 0m41.905s 00:09:54.803 sys 0m8.076s 00:09:54.803 17:00:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:54.803 17:00:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:54.803 ************************************ 00:09:54.803 END TEST nvmf_connect_stress 00:09:54.803 ************************************ 00:09:54.803 17:00:41 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:54.803 17:00:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:54.803 17:00:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:54.803 17:00:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:54.803 ************************************ 00:09:54.803 START TEST nvmf_fused_ordering 00:09:54.803 ************************************ 00:09:54.803 17:00:41 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:09:54.803 * Looking for test storage... 
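The repeated connect_stress.sh@34/@35 entries above are produced by a watchdog loop that keeps sending RPCs to the target for as long as the stress process stays alive. A minimal bash sketch of that pattern, using a hypothetical $stress_pid and the rpc_cmd helper from autotest_common.sh (illustrative only, not the exact script):

    # Keep the target busy with RPCs while the stress workload is still running.
    while kill -0 "$stress_pid" 2>/dev/null; do
        rpc_cmd nvmf_get_subsystems >/dev/null   # any lightweight RPC keeps the target exercised
        sleep 0.25
    done
    wait "$stress_pid" || true     # reap it once kill -0 starts reporting "No such process"
    rm -f "$testdir/rpc.txt"       # same cleanup seen at connect_stress.sh@39 above ($testdir assumed)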
00:09:54.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:09:54.803 17:00:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:00.070 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:00.070 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:00.070 Found net devices under 0000:86:00.0: cvl_0_0 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.070 17:00:47 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:00.070 Found net devices under 0000:86:00.1: cvl_0_1 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.070 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:00.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:00.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:10:00.071 00:10:00.071 --- 10.0.0.2 ping statistics --- 00:10:00.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.071 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:10:00.071 00:10:00.071 --- 10.0.0.1 ping statistics --- 00:10:00.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.071 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2970311 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2970311 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 2970311 ']' 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:00.071 17:00:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:00.071 [2024-05-15 17:00:47.654356] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:10:00.071 [2024-05-15 17:00:47.654404] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.071 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.071 [2024-05-15 17:00:47.711747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.329 [2024-05-15 17:00:47.790558] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.329 [2024-05-15 17:00:47.790593] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.329 [2024-05-15 17:00:47.790600] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.329 [2024-05-15 17:00:47.790606] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.329 [2024-05-15 17:00:47.790611] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.329 [2024-05-15 17:00:47.790632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:00.934 [2024-05-15 17:00:48.501318] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:00.934 [2024-05-15 17:00:48.517302] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:00.934 [2024-05-15 17:00:48.517491] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:00.934 NULL1 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.934 17:00:48 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:00.934 [2024-05-15 17:00:48.569114] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
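The rpc_cmd calls traced just above (fused_ordering.sh@15 through @22) build the fused-ordering target. Collected into a standalone sequence they would look roughly like the following, assuming SPDK's scripts/rpc.py is invoked directly against the default socket (in the test run itself the rpc_cmd wrapper handles the socket and namespace):

    # Rough restatement of the traced setup; paths and invocation style are assumptions.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192             # fused_ordering.sh@15
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                                   # fused_ordering.sh@16
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                       # fused_ordering.sh@17
    ./scripts/rpc.py bdev_null_create NULL1 1000 512                     # 1000 MB null bdev, 512-byte blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # The fused_ordering tool then connects to the exported namespace:
    ./test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'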
00:10:00.934 [2024-05-15 17:00:48.569147] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2970553 ] 00:10:00.934 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.499 Attached to nqn.2016-06.io.spdk:cnode1 00:10:01.499 Namespace ID: 1 size: 1GB 00:10:01.499 fused_ordering(0) 00:10:01.499 fused_ordering(1) 00:10:01.499 fused_ordering(2) 00:10:01.499 fused_ordering(3) 00:10:01.499 fused_ordering(4) 00:10:01.499 fused_ordering(5) 00:10:01.499 fused_ordering(6) 00:10:01.499 fused_ordering(7) 00:10:01.499 fused_ordering(8) 00:10:01.499 fused_ordering(9) 00:10:01.499 fused_ordering(10) 00:10:01.499 fused_ordering(11) 00:10:01.499 fused_ordering(12) 00:10:01.499 fused_ordering(13) 00:10:01.499 fused_ordering(14) 00:10:01.499 fused_ordering(15) 00:10:01.499 fused_ordering(16) 00:10:01.499 fused_ordering(17) 00:10:01.499 fused_ordering(18) 00:10:01.499 fused_ordering(19) 00:10:01.499 fused_ordering(20) 00:10:01.499 fused_ordering(21) 00:10:01.499 fused_ordering(22) 00:10:01.499 fused_ordering(23) 00:10:01.499 fused_ordering(24) 00:10:01.499 fused_ordering(25) 00:10:01.499 fused_ordering(26) 00:10:01.499 fused_ordering(27) 00:10:01.499 fused_ordering(28) 00:10:01.499 fused_ordering(29) 00:10:01.499 fused_ordering(30) 00:10:01.499 fused_ordering(31) 00:10:01.499 fused_ordering(32) 00:10:01.499 fused_ordering(33) 00:10:01.499 fused_ordering(34) 00:10:01.499 fused_ordering(35) 00:10:01.499 fused_ordering(36) 00:10:01.499 fused_ordering(37) 00:10:01.499 fused_ordering(38) 00:10:01.499 fused_ordering(39) 00:10:01.499 fused_ordering(40) 00:10:01.499 fused_ordering(41) 00:10:01.499 fused_ordering(42) 00:10:01.499 fused_ordering(43) 00:10:01.499 fused_ordering(44) 00:10:01.499 fused_ordering(45) 00:10:01.499 fused_ordering(46) 00:10:01.499 fused_ordering(47) 00:10:01.499 fused_ordering(48) 00:10:01.499 fused_ordering(49) 00:10:01.499 fused_ordering(50) 00:10:01.499 fused_ordering(51) 00:10:01.499 fused_ordering(52) 00:10:01.499 fused_ordering(53) 00:10:01.499 fused_ordering(54) 00:10:01.499 fused_ordering(55) 00:10:01.499 fused_ordering(56) 00:10:01.499 fused_ordering(57) 00:10:01.499 fused_ordering(58) 00:10:01.499 fused_ordering(59) 00:10:01.499 fused_ordering(60) 00:10:01.499 fused_ordering(61) 00:10:01.499 fused_ordering(62) 00:10:01.499 fused_ordering(63) 00:10:01.499 fused_ordering(64) 00:10:01.499 fused_ordering(65) 00:10:01.499 fused_ordering(66) 00:10:01.499 fused_ordering(67) 00:10:01.499 fused_ordering(68) 00:10:01.499 fused_ordering(69) 00:10:01.499 fused_ordering(70) 00:10:01.499 fused_ordering(71) 00:10:01.499 fused_ordering(72) 00:10:01.499 fused_ordering(73) 00:10:01.499 fused_ordering(74) 00:10:01.499 fused_ordering(75) 00:10:01.499 fused_ordering(76) 00:10:01.499 fused_ordering(77) 00:10:01.499 fused_ordering(78) 00:10:01.499 fused_ordering(79) 00:10:01.499 fused_ordering(80) 00:10:01.499 fused_ordering(81) 00:10:01.499 fused_ordering(82) 00:10:01.499 fused_ordering(83) 00:10:01.499 fused_ordering(84) 00:10:01.499 fused_ordering(85) 00:10:01.499 fused_ordering(86) 00:10:01.499 fused_ordering(87) 00:10:01.499 fused_ordering(88) 00:10:01.499 fused_ordering(89) 00:10:01.499 fused_ordering(90) 00:10:01.499 fused_ordering(91) 00:10:01.499 fused_ordering(92) 00:10:01.499 fused_ordering(93) 00:10:01.499 fused_ordering(94) 00:10:01.499 fused_ordering(95) 00:10:01.499 fused_ordering(96) 00:10:01.499 
fused_ordering(97) 00:10:01.499 fused_ordering(98) 00:10:01.499 fused_ordering(99) 00:10:01.499 fused_ordering(100) 00:10:01.499 fused_ordering(101) 00:10:01.499 fused_ordering(102) 00:10:01.499 fused_ordering(103) 00:10:01.499 fused_ordering(104) 00:10:01.499 fused_ordering(105) 00:10:01.499 fused_ordering(106) 00:10:01.499 fused_ordering(107) 00:10:01.499 fused_ordering(108) 00:10:01.499 fused_ordering(109) 00:10:01.499 fused_ordering(110) 00:10:01.499 fused_ordering(111) 00:10:01.499 fused_ordering(112) 00:10:01.499 fused_ordering(113) 00:10:01.499 fused_ordering(114) 00:10:01.499 fused_ordering(115) 00:10:01.499 fused_ordering(116) 00:10:01.499 fused_ordering(117) 00:10:01.499 fused_ordering(118) 00:10:01.499 fused_ordering(119) 00:10:01.499 fused_ordering(120) 00:10:01.499 fused_ordering(121) 00:10:01.499 fused_ordering(122) 00:10:01.499 fused_ordering(123) 00:10:01.499 fused_ordering(124) 00:10:01.499 fused_ordering(125) 00:10:01.499 fused_ordering(126) 00:10:01.499 fused_ordering(127) 00:10:01.499 fused_ordering(128) 00:10:01.499 fused_ordering(129) 00:10:01.499 fused_ordering(130) 00:10:01.499 fused_ordering(131) 00:10:01.499 fused_ordering(132) 00:10:01.499 fused_ordering(133) 00:10:01.499 fused_ordering(134) 00:10:01.499 fused_ordering(135) 00:10:01.499 fused_ordering(136) 00:10:01.499 fused_ordering(137) 00:10:01.499 fused_ordering(138) 00:10:01.499 fused_ordering(139) 00:10:01.499 fused_ordering(140) 00:10:01.499 fused_ordering(141) 00:10:01.499 fused_ordering(142) 00:10:01.499 fused_ordering(143) 00:10:01.499 fused_ordering(144) 00:10:01.499 fused_ordering(145) 00:10:01.499 fused_ordering(146) 00:10:01.499 fused_ordering(147) 00:10:01.499 fused_ordering(148) 00:10:01.499 fused_ordering(149) 00:10:01.499 fused_ordering(150) 00:10:01.499 fused_ordering(151) 00:10:01.499 fused_ordering(152) 00:10:01.499 fused_ordering(153) 00:10:01.499 fused_ordering(154) 00:10:01.499 fused_ordering(155) 00:10:01.499 fused_ordering(156) 00:10:01.499 fused_ordering(157) 00:10:01.499 fused_ordering(158) 00:10:01.499 fused_ordering(159) 00:10:01.499 fused_ordering(160) 00:10:01.499 fused_ordering(161) 00:10:01.499 fused_ordering(162) 00:10:01.499 fused_ordering(163) 00:10:01.499 fused_ordering(164) 00:10:01.499 fused_ordering(165) 00:10:01.500 fused_ordering(166) 00:10:01.500 fused_ordering(167) 00:10:01.500 fused_ordering(168) 00:10:01.500 fused_ordering(169) 00:10:01.500 fused_ordering(170) 00:10:01.500 fused_ordering(171) 00:10:01.500 fused_ordering(172) 00:10:01.500 fused_ordering(173) 00:10:01.500 fused_ordering(174) 00:10:01.500 fused_ordering(175) 00:10:01.500 fused_ordering(176) 00:10:01.500 fused_ordering(177) 00:10:01.500 fused_ordering(178) 00:10:01.500 fused_ordering(179) 00:10:01.500 fused_ordering(180) 00:10:01.500 fused_ordering(181) 00:10:01.500 fused_ordering(182) 00:10:01.500 fused_ordering(183) 00:10:01.500 fused_ordering(184) 00:10:01.500 fused_ordering(185) 00:10:01.500 fused_ordering(186) 00:10:01.500 fused_ordering(187) 00:10:01.500 fused_ordering(188) 00:10:01.500 fused_ordering(189) 00:10:01.500 fused_ordering(190) 00:10:01.500 fused_ordering(191) 00:10:01.500 fused_ordering(192) 00:10:01.500 fused_ordering(193) 00:10:01.500 fused_ordering(194) 00:10:01.500 fused_ordering(195) 00:10:01.500 fused_ordering(196) 00:10:01.500 fused_ordering(197) 00:10:01.500 fused_ordering(198) 00:10:01.500 fused_ordering(199) 00:10:01.500 fused_ordering(200) 00:10:01.500 fused_ordering(201) 00:10:01.500 fused_ordering(202) 00:10:01.500 fused_ordering(203) 00:10:01.500 fused_ordering(204) 
00:10:01.500 fused_ordering(205) [fused_ordering(206) through fused_ordering(956) logged sequentially, timestamps advancing from 00:10:01.500 to 00:10:02.857; individual entries elided]
fused_ordering(957) 00:10:02.857 fused_ordering(958) 00:10:02.857 fused_ordering(959) 00:10:02.857 fused_ordering(960) 00:10:02.857 fused_ordering(961) 00:10:02.857 fused_ordering(962) 00:10:02.857 fused_ordering(963) 00:10:02.857 fused_ordering(964) 00:10:02.857 fused_ordering(965) 00:10:02.857 fused_ordering(966) 00:10:02.857 fused_ordering(967) 00:10:02.857 fused_ordering(968) 00:10:02.857 fused_ordering(969) 00:10:02.857 fused_ordering(970) 00:10:02.857 fused_ordering(971) 00:10:02.857 fused_ordering(972) 00:10:02.857 fused_ordering(973) 00:10:02.857 fused_ordering(974) 00:10:02.857 fused_ordering(975) 00:10:02.857 fused_ordering(976) 00:10:02.857 fused_ordering(977) 00:10:02.857 fused_ordering(978) 00:10:02.857 fused_ordering(979) 00:10:02.857 fused_ordering(980) 00:10:02.857 fused_ordering(981) 00:10:02.857 fused_ordering(982) 00:10:02.857 fused_ordering(983) 00:10:02.857 fused_ordering(984) 00:10:02.857 fused_ordering(985) 00:10:02.857 fused_ordering(986) 00:10:02.857 fused_ordering(987) 00:10:02.857 fused_ordering(988) 00:10:02.857 fused_ordering(989) 00:10:02.857 fused_ordering(990) 00:10:02.857 fused_ordering(991) 00:10:02.857 fused_ordering(992) 00:10:02.857 fused_ordering(993) 00:10:02.857 fused_ordering(994) 00:10:02.857 fused_ordering(995) 00:10:02.857 fused_ordering(996) 00:10:02.857 fused_ordering(997) 00:10:02.857 fused_ordering(998) 00:10:02.857 fused_ordering(999) 00:10:02.857 fused_ordering(1000) 00:10:02.857 fused_ordering(1001) 00:10:02.857 fused_ordering(1002) 00:10:02.857 fused_ordering(1003) 00:10:02.857 fused_ordering(1004) 00:10:02.857 fused_ordering(1005) 00:10:02.857 fused_ordering(1006) 00:10:02.857 fused_ordering(1007) 00:10:02.857 fused_ordering(1008) 00:10:02.857 fused_ordering(1009) 00:10:02.857 fused_ordering(1010) 00:10:02.857 fused_ordering(1011) 00:10:02.857 fused_ordering(1012) 00:10:02.857 fused_ordering(1013) 00:10:02.857 fused_ordering(1014) 00:10:02.857 fused_ordering(1015) 00:10:02.857 fused_ordering(1016) 00:10:02.857 fused_ordering(1017) 00:10:02.857 fused_ordering(1018) 00:10:02.857 fused_ordering(1019) 00:10:02.857 fused_ordering(1020) 00:10:02.857 fused_ordering(1021) 00:10:02.857 fused_ordering(1022) 00:10:02.857 fused_ordering(1023) 00:10:02.857 17:00:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:02.857 17:00:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:02.857 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:02.857 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:02.857 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:02.857 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:02.857 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:02.857 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:02.857 rmmod nvme_tcp 00:10:02.857 rmmod nvme_fabrics 00:10:03.116 rmmod nvme_keyring 00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2970311 ']' 00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2970311 
00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 2970311 ']' 00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 2970311 00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2970311 00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2970311' 00:10:03.116 killing process with pid 2970311 00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 2970311 00:10:03.116 [2024-05-15 17:00:50.603918] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:03.116 17:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 2970311 00:10:03.375 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:03.375 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:03.375 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:03.375 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:03.375 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:03.375 17:00:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:03.375 17:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:03.375 17:00:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.276 17:00:52 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:05.276 00:10:05.276 real 0m10.875s 00:10:05.276 user 0m5.631s 00:10:05.276 sys 0m5.632s 00:10:05.276 17:00:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:05.276 17:00:52 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:05.276 ************************************ 00:10:05.276 END TEST nvmf_fused_ordering 00:10:05.276 ************************************ 00:10:05.276 17:00:52 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:05.276 17:00:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:05.276 17:00:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:05.276 17:00:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:05.534 ************************************ 00:10:05.534 START TEST nvmf_delete_subsystem 00:10:05.534 ************************************ 00:10:05.534 17:00:52 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 
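For reference, the run recorded below can be reproduced outside the CI pipeline with roughly the following two commands, assuming the SPDK checkout at the workspace path above and the same E810 (ice) NIC layout; this is a hand-written sketch of the invocation, not output captured from this run:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/nvmf/target/delete_subsystem.sh --transport=tcp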
00:10:05.534 * Looking for test storage... 00:10:05.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:05.534 17:00:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:10.795 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.795 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:10.795 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:10.795 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:10.796 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:10.796 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:10.796 Found net devices under 0000:86:00.0: cvl_0_0 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:10.796 Found net devices under 0000:86:00.1: cvl_0_1 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:10.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:10.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:10:10.796 00:10:10.796 --- 10.0.0.2 ping statistics --- 00:10:10.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.796 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:10.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:10.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:10:10.796 00:10:10.796 --- 10.0.0.1 ping statistics --- 00:10:10.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.796 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:10.796 17:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:10.797 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:10.797 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:10.797 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:10.797 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:10.797 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2974296 00:10:10.797 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:10.797 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2974296 00:10:10.797 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 2974296 ']' 00:10:10.797 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.797 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:10.797 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
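Condensed from the nvmf_tcp_init trace above, the target-side plumbing amounts to roughly the following; cvl_0_0/cvl_0_1 are the two E810 ports detected earlier, and these commands are a sketch of what nvmf/common.sh executes rather than an exact replay:

  ip netns add cvl_0_0_ns_spdk                                  # target runs in its own network namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one port into that namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic reach the initiator port
  ping -c 1 10.0.0.2                                            # reachability checks from both sides
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1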
00:10:10.797 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:10.797 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:10.797 [2024-05-15 17:00:58.073222] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:10:10.797 [2024-05-15 17:00:58.073263] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.797 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.797 [2024-05-15 17:00:58.128272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:10.797 [2024-05-15 17:00:58.207095] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.797 [2024-05-15 17:00:58.207133] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.797 [2024-05-15 17:00:58.207141] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.797 [2024-05-15 17:00:58.207147] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.797 [2024-05-15 17:00:58.207152] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.797 [2024-05-15 17:00:58.207208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.797 [2024-05-15 17:00:58.207210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.368 [2024-05-15 17:00:58.915690] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.368 17:00:58 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.368 [2024-05-15 17:00:58.931669] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:11.368 [2024-05-15 17:00:58.931910] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.368 NULL1 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.368 Delay0 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2974512 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:11.368 17:00:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:11.368 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.368 [2024-05-15 17:00:59.006386] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
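Putting the RPC trace above together, the configuration under test boils down to the sequence below. rpc_cmd is the harness wrapper; this sketch assumes it resolves to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket, so treat it as an outline rather than the literal script:

  # target was started earlier as: ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512                  # 1000 MiB null bdev, 512-byte blocks
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # wrap NULL1 with artificial latency so I/O stays queued
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # queue 5 s of I/O against the slow Delay0 namespace, then delete the subsystem underneath it
  ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1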
00:10:13.907 17:01:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 17:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 17:01:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:13.907 [long run of in-flight perf completions elided: repeated 'Read completed with error (sct=0, sc=8)' and 'Write completed with error (sct=0, sc=8)' records, with 'starting I/O failed: -6' interspersed, while the subsystem is torn down]
00:10:13.907 [2024-05-15 17:01:01.127603] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a66a0 is same with the state(5) to be set
00:10:13.907 [2024-05-15 17:01:01.127931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc6a8000c00 is same with the state(5) to be set
00:10:14.472 [2024-05-15 17:01:02.101612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a6060 is same with the state(5) to be set
00:10:14.472 [2024-05-15 17:01:02.129944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a7f10 is same with the state(5) to be set
00:10:14.472 [further Read/Write error completions continue]
(sct=0, sc=8) 00:10:14.472 Read completed with error (sct=0, sc=8) 00:10:14.472 Write completed with error (sct=0, sc=8) 00:10:14.472 Read completed with error (sct=0, sc=8) 00:10:14.472 Write completed with error (sct=0, sc=8) 00:10:14.472 Read completed with error (sct=0, sc=8) 00:10:14.472 Read completed with error (sct=0, sc=8) 00:10:14.472 [2024-05-15 17:01:02.130080] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc6a800c2f0 is same with the state(5) to be set 00:10:14.472 Read completed with error (sct=0, sc=8) 00:10:14.472 Read completed with error (sct=0, sc=8) 00:10:14.472 Read completed with error (sct=0, sc=8) 00:10:14.472 Read completed with error (sct=0, sc=8) 00:10:14.472 Read completed with error (sct=0, sc=8) 00:10:14.472 Read completed with error (sct=0, sc=8) 00:10:14.472 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 [2024-05-15 17:01:02.130410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a70c0 is same with the state(5) to be set 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 
00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Write completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 Read completed with error (sct=0, sc=8) 00:10:14.473 [2024-05-15 17:01:02.130569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14aec20 is same with the state(5) to be set 00:10:14.473 Initializing NVMe Controllers 00:10:14.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:14.473 Controller IO queue size 128, less than required. 00:10:14.473 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:14.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:14.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:14.473 Initialization complete. Launching workers. 
00:10:14.473 ======================================================== 00:10:14.473 Latency(us) 00:10:14.473 Device Information : IOPS MiB/s Average min max 00:10:14.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.58 0.10 943780.74 831.47 1012168.57 00:10:14.473 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.41 0.07 897248.20 223.79 1012038.38 00:10:14.473 ======================================================== 00:10:14.473 Total : 345.99 0.17 923552.10 223.79 1012168.57 00:10:14.473 00:10:14.473 [2024-05-15 17:01:02.131218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a6060 (9): Bad file descriptor 00:10:14.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:14.730 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.730 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:14.730 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2974512 00:10:14.730 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2974512 00:10:14.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2974512) - No such process 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2974512 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2974512 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2974512 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.988 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:15.246 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.246 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
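The latency table above closes out the first phase of delete_subsystem.sh: the two worker cores average only a few hundred IOPS before the controller goes away, and spdk_nvme_perf exits with errors because nqn.2016-06.io.spdk:cnode1 was deleted while it still had work queued. The sct=0, sc=8 completions are consistent with the generic NVMe status for a command aborted when its submission queue is deleted, which is what tearing down the subsystem produces on the host side. A quick, purely illustrative way to triage a console log like this offline (the file name is a placeholder):

    # Count the aborted completions and see which qpairs hit the recv-state error.
    grep -c 'completed with error (sct=0, sc=8)' console.log
    grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c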
00:10:15.246 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.246 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:15.246 [2024-05-15 17:01:02.657823] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.246 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.246 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.246 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.246 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:15.246 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.246 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2975019 00:10:15.246 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:15.246 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:15.246 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975019 00:10:15.246 17:01:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:15.246 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.246 [2024-05-15 17:01:02.718854] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
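What follows is the watchdog loop from delete_subsystem.sh: spdk_nvme_perf has been relaunched (perf_pid=2975019) against the re-created subsystem, and the script polls the process every half second until it exits on its own at the end of its 3-second run. A sketch of the loop behind those xtrace lines, reconstructed from the trace rather than copied from the script:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
        if (( delay++ > 20 )); then             # give up after roughly 10 s
            echo "perf $perf_pid did not finish in time" >&2
            break
        fi
        sleep 0.5
    done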
00:10:15.810 17:01:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:15.810 17:01:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975019 00:10:15.810 17:01:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:16.067 17:01:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:16.067 17:01:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975019 00:10:16.067 17:01:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:16.631 17:01:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:16.631 17:01:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975019 00:10:16.631 17:01:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:17.205 17:01:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:17.205 17:01:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975019 00:10:17.205 17:01:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:17.769 17:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:17.769 17:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975019 00:10:17.769 17:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:18.333 17:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:18.333 17:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975019 00:10:18.333 17:01:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:18.333 Initializing NVMe Controllers 00:10:18.333 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:18.333 Controller IO queue size 128, less than required. 00:10:18.333 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:18.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:18.333 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:18.333 Initialization complete. Launching workers. 
00:10:18.333 ======================================================== 00:10:18.333 Latency(us) 00:10:18.333 Device Information : IOPS MiB/s Average min max 00:10:18.333 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003442.05 1000167.78 1041234.24 00:10:18.333 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004936.92 1000419.43 1012755.71 00:10:18.333 ======================================================== 00:10:18.333 Total : 256.00 0.12 1004189.48 1000167.78 1041234.24 00:10:18.333 00:10:18.590 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:18.590 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2975019 00:10:18.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2975019) - No such process 00:10:18.590 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2975019 00:10:18.590 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:18.590 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:18.590 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:18.590 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:18.590 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:18.590 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:18.590 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:18.590 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:18.590 rmmod nvme_tcp 00:10:18.590 rmmod nvme_fabrics 00:10:18.847 rmmod nvme_keyring 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2974296 ']' 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2974296 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 2974296 ']' 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 2974296 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2974296 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2974296' 00:10:18.847 killing process with pid 2974296 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 2974296 00:10:18.847 [2024-05-15 17:01:06.335263] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:18.847 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 2974296 00:10:19.105 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:19.105 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:19.105 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:19.105 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:19.105 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:19.105 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.105 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.105 17:01:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.006 17:01:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:21.006 00:10:21.006 real 0m15.655s 00:10:21.006 user 0m29.977s 00:10:21.006 sys 0m4.604s 00:10:21.006 17:01:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:21.006 17:01:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:21.006 ************************************ 00:10:21.006 END TEST nvmf_delete_subsystem 00:10:21.006 ************************************ 00:10:21.006 17:01:08 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:21.006 17:01:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:21.006 17:01:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:21.006 17:01:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:21.264 ************************************ 00:10:21.264 START TEST nvmf_ns_masking 00:10:21.264 ************************************ 00:10:21.264 17:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:21.264 * Looking for test storage... 
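The nvmf_ns_masking run starting here exercises SPDK's per-host namespace visibility: a namespace attached with --no-auto-visible stays hidden from a connected controller until its host NQN is granted access with nvmf_ns_add_host, and it is masked again after nvmf_ns_remove_host. The trace below checks this from the initiator with nvme list-ns / nvme id-ns, comparing the reported NGUID against all zeroes. The control-plane side, condensed from the rpc.py invocations visible later in the trace (the full rpc.py path is shortened here for readability):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # NSID 1 becomes visible to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # and is masked again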
00:10:21.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.264 17:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.264 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:21.264 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.264 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.264 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.264 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.264 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=43fea315-f1f4-4e2e-8186-b76449dc770e 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:21.265 17:01:08 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:21.265 17:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:26.570 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:26.570 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:26.570 Found net devices under 0000:86:00.0: cvl_0_0 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
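Both Intel E810 ports (8086:159b) are mapped to their kernel interface names through the standard sysfs layout, which is what the pci_net_devs glob above does; the result is cvl_0_0 and cvl_0_1. The same lookup as standalone commands (PCI addresses taken from the 'Found' lines above):

    for pci in 0000:86:00.0 0000:86:00.1; do
        for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "$pci -> ${netdev##*/}"
        done
    done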
00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:26.570 Found net devices under 0000:86:00.1: cvl_0_1 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:26.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:26.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:10:26.570 00:10:26.570 --- 10.0.0.2 ping statistics --- 00:10:26.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.570 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:26.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:26.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:10:26.570 00:10:26.570 --- 10.0.0.1 ping statistics --- 00:10:26.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:26.570 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:26.570 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2979036 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2979036 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 2979036 ']' 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:26.571 17:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:26.571 [2024-05-15 17:01:14.025736] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
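With the two ports identified, nvmftestinit builds a self-contained TCP test topology on a single host: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, both directions are verified with ping, and nvmf_tgt is then launched inside the namespace. The same plumbing, condensed from the commands in the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator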
00:10:26.571 [2024-05-15 17:01:14.025780] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.571 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.571 [2024-05-15 17:01:14.082100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:26.571 [2024-05-15 17:01:14.162689] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.571 [2024-05-15 17:01:14.162724] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.571 [2024-05-15 17:01:14.162731] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.571 [2024-05-15 17:01:14.162737] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.571 [2024-05-15 17:01:14.162742] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:26.571 [2024-05-15 17:01:14.162791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.571 [2024-05-15 17:01:14.162806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.571 [2024-05-15 17:01:14.162895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.571 [2024-05-15 17:01:14.162897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.501 17:01:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:27.501 17:01:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:10:27.501 17:01:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:27.501 17:01:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:27.501 17:01:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:27.501 17:01:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.501 17:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:27.501 [2024-05-15 17:01:15.017688] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.501 17:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:10:27.501 17:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:10:27.501 17:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:27.758 Malloc1 00:10:27.758 17:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:27.758 Malloc2 00:10:28.016 17:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:28.016 17:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:28.273 17:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.530 [2024-05-15 17:01:15.970043] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:28.530 [2024-05-15 17:01:15.970297] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.530 17:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:10:28.530 17:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 43fea315-f1f4-4e2e-8186-b76449dc770e -a 10.0.0.2 -s 4420 -i 4 00:10:28.530 17:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:10:28.530 17:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:10:28.530 17:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:28.530 17:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:28.530 17:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:31.051 [ 0]:0x1 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31580a68940242b9bf71206826ae179d 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31580a68940242b9bf71206826ae179d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:31.051 [ 0]:0x1 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31580a68940242b9bf71206826ae179d 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31580a68940242b9bf71206826ae179d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:31.051 [ 1]:0x2 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:31.051 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1b6d34ef25e24d32b0dcf280a6c9bed6 00:10:31.052 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1b6d34ef25e24d32b0dcf280a6c9bed6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:31.052 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:10:31.052 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.308 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.308 17:01:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:31.565 17:01:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:10:31.565 17:01:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 43fea315-f1f4-4e2e-8186-b76449dc770e -a 10.0.0.2 -s 4420 -i 4 00:10:31.822 17:01:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:31.822 17:01:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:10:31.822 17:01:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.822 17:01:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:10:31.822 17:01:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:10:31.822 17:01:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:33.717 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:33.974 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:33.974 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:33.974 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:33.974 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:33.974 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:33.974 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:33.974 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:33.974 17:01:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:33.974 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:10:33.975 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:33.975 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:33.975 [ 0]:0x2 00:10:33.975 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:33.975 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:33.975 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1b6d34ef25e24d32b0dcf280a6c9bed6 00:10:33.975 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1b6d34ef25e24d32b0dcf280a6c9bed6 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:33.975 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:34.232 [ 0]:0x1 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31580a68940242b9bf71206826ae179d 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31580a68940242b9bf71206826ae179d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:34.232 [ 1]:0x2 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1b6d34ef25e24d32b0dcf280a6c9bed6 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1b6d34ef25e24d32b0dcf280a6c9bed6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:34.232 17:01:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:34.490 17:01:22 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:34.490 [ 0]:0x2 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1b6d34ef25e24d32b0dcf280a6c9bed6 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1b6d34ef25e24d32b0dcf280a6c9bed6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:34.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.490 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:34.748 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:10:34.748 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 43fea315-f1f4-4e2e-8186-b76449dc770e -a 10.0.0.2 -s 4420 -i 4 00:10:35.005 17:01:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:35.005 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:10:35.005 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:35.005 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:10:35.005 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:10:35.005 17:01:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:10:36.901 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:36.901 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:36.901 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:36.901 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:10:36.901 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:36.901 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:10:36.901 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:10:36.901 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:37.159 [ 0]:0x1 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=31580a68940242b9bf71206826ae179d 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 31580a68940242b9bf71206826ae179d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:37.159 [ 1]:0x2 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1b6d34ef25e24d32b0dcf280a6c9bed6 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1b6d34ef25e24d32b0dcf280a6c9bed6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:37.159 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:37.417 [ 0]:0x2 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1b6d34ef25e24d32b0dcf280a6c9bed6 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1b6d34ef25e24d32b0dcf280a6c9bed6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:37.417 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:37.418 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:37.418 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:37.418 17:01:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:37.676 [2024-05-15 17:01:25.152398] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:37.676 
request: 00:10:37.676 { 00:10:37.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.676 "nsid": 2, 00:10:37.676 "host": "nqn.2016-06.io.spdk:host1", 00:10:37.676 "method": "nvmf_ns_remove_host", 00:10:37.676 "req_id": 1 00:10:37.676 } 00:10:37.676 Got JSON-RPC error response 00:10:37.676 response: 00:10:37.676 { 00:10:37.676 "code": -32602, 00:10:37.676 "message": "Invalid parameters" 00:10:37.676 } 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:10:37.676 [ 0]:0x2 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:37.676 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:10:37.933 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=1b6d34ef25e24d32b0dcf280a6c9bed6 00:10:37.933 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 1b6d34ef25e24d32b0dcf280a6c9bed6 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:37.933 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:10:37.933 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.933 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:38.191 rmmod nvme_tcp 00:10:38.191 rmmod nvme_fabrics 00:10:38.191 rmmod nvme_keyring 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2979036 ']' 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2979036 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 2979036 ']' 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 2979036 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2979036 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2979036' 00:10:38.191 killing process with pid 2979036 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 2979036 00:10:38.191 [2024-05-15 17:01:25.767840] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:38.191 17:01:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 2979036 00:10:38.450 17:01:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:38.450 17:01:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:38.450 17:01:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:38.450 17:01:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:10:38.450 17:01:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:38.450 17:01:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.450 17:01:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:38.450 17:01:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.976 17:01:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:40.976 00:10:40.976 real 0m19.395s 00:10:40.976 user 0m51.378s 00:10:40.976 sys 0m5.436s 00:10:40.976 17:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:40.976 17:01:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:40.976 ************************************ 00:10:40.976 END TEST nvmf_ns_masking 00:10:40.976 ************************************ 00:10:40.976 17:01:28 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:40.976 17:01:28 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:40.976 17:01:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:40.976 17:01:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:40.976 17:01:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:40.976 ************************************ 00:10:40.976 START TEST nvmf_nvme_cli 00:10:40.976 ************************************ 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:40.976 * Looking for test storage... 
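The ns_masking run that just finished boils down to a short sequence of SPDK JSON-RPC calls plus an nvme-cli visibility check on the initiator. A minimal stand-alone sketch of that flow follows; it assumes a running nvmf target with subsystem nqn.2016-06.io.spdk:cnode1 and bdev Malloc1 as in the trace, the rpc.py path is abbreviated, and the invocations mirror the ones logged above rather than prescribing the only way to drive the feature.

#!/usr/bin/env bash
# Sketch of the namespace-masking flow exercised by ns_masking.sh above.
RPC=./scripts/rpc.py                      # abbreviated; the job uses the full workspace path
SUBSYS=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2016-06.io.spdk:host1

# Attach Malloc1 as namespace 1, hidden from every host by default.
$RPC nvmf_subsystem_add_ns "$SUBSYS" Malloc1 -n 1 --no-auto-visible

# Expose namespace 1 to one specific host NQN, then hide it again.
$RPC nvmf_ns_add_host    "$SUBSYS" 1 "$HOST"
$RPC nvmf_ns_remove_host "$SUBSYS" 1 "$HOST"

# On the initiator, a namespace counts as visible when it shows up in
# list-ns and id-ns reports a non-zero NGUID -- the same check that
# ns_is_visible() performs in the trace (a masked namespace reports an
# all-zero NGUID there).
nvme list-ns /dev/nvme0 | grep 0x1
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid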
00:10:40.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:40.976 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.977 17:01:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.977 17:01:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.977 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:40.977 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:40.977 17:01:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:10:40.977 17:01:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:46.240 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:46.241 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:46.241 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:46.241 Found net devices under 0000:86:00.0: cvl_0_0 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:46.241 Found net devices under 0000:86:00.1: cvl_0_1 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:46.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:46.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:10:46.241 00:10:46.241 --- 10.0.0.2 ping statistics --- 00:10:46.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.241 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:46.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:10:46.241 00:10:46.241 --- 10.0.0.1 ping statistics --- 00:10:46.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.241 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2984640 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2984640 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 2984640 ']' 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:46.241 17:01:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.242 [2024-05-15 17:01:33.601565] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:10:46.242 [2024-05-15 17:01:33.601609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.242 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.242 [2024-05-15 17:01:33.657058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.242 [2024-05-15 17:01:33.737167] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.242 [2024-05-15 17:01:33.737203] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:46.242 [2024-05-15 17:01:33.737210] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.242 [2024-05-15 17:01:33.737216] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.242 [2024-05-15 17:01:33.737222] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.242 [2024-05-15 17:01:33.737270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.242 [2024-05-15 17:01:33.737286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.242 [2024-05-15 17:01:33.737374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.242 [2024-05-15 17:01:33.737376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.807 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:46.807 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:10:46.807 17:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:46.807 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:46.807 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:46.807 17:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.808 17:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.808 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.808 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:46.808 [2024-05-15 17:01:34.450142] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.808 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.808 17:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:46.808 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.808 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:47.065 Malloc0 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:47.065 Malloc1 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.065 17:01:34 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:47.065 [2024-05-15 17:01:34.531664] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:47.065 [2024-05-15 17:01:34.531907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.065 17:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:47.065 00:10:47.065 Discovery Log Number of Records 2, Generation counter 2 00:10:47.065 =====Discovery Log Entry 0====== 00:10:47.065 trtype: tcp 00:10:47.066 adrfam: ipv4 00:10:47.066 subtype: current discovery subsystem 00:10:47.066 treq: not required 00:10:47.066 portid: 0 00:10:47.066 trsvcid: 4420 00:10:47.066 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:47.066 traddr: 10.0.0.2 00:10:47.066 eflags: explicit discovery connections, duplicate discovery information 00:10:47.066 sectype: none 00:10:47.066 =====Discovery Log Entry 1====== 00:10:47.066 trtype: tcp 00:10:47.066 adrfam: ipv4 00:10:47.066 subtype: nvme subsystem 00:10:47.066 treq: not required 00:10:47.066 portid: 0 00:10:47.066 trsvcid: 4420 00:10:47.066 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:47.066 traddr: 10.0.0.2 00:10:47.066 eflags: none 00:10:47.066 sectype: none 00:10:47.066 17:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:47.066 17:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:47.066 17:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:47.066 17:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:47.066 17:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:47.066 17:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:47.066 17:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
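The two discovery log entries above line up with the target configuration built a few lines earlier in this test. Condensed into a stand-alone sketch (NQNs, addresses, serial and the generated hostnqn/hostid pair are taken from the trace; the rpc.py path is abbreviated and error handling is omitted):

# Target side: build the same subsystem the discovery log reports.
RPC=./scripts/rpc.py                                  # abbreviated path
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Initiator side: discover, connect, count the namespaces, disconnect.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
nvme connect  --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # the test expects 2 here
nvme disconnect -n nqn.2016-06.io.spdk:cnode1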
00:10:47.066 17:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:47.066 17:01:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:47.066 17:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:47.066 17:01:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:48.492 17:01:35 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:48.492 17:01:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:10:48.492 17:01:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.492 17:01:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:10:48.492 17:01:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:10:48.492 17:01:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:10:50.389 /dev/nvme0n1 ]] 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:50.389 17:01:37 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:50.389 17:01:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:50.647 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:50.647 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:50.647 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:50.647 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:50.647 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:10:50.647 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:10:50.647 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:50.647 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:10:50.647 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:10:50.647 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:50.647 17:01:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:10:50.647 17:01:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:50.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:50.905 rmmod nvme_tcp 00:10:50.905 rmmod nvme_fabrics 00:10:50.905 rmmod nvme_keyring 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2984640 ']' 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2984640 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 2984640 ']' 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 2984640 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2984640 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2984640' 00:10:50.905 killing process with pid 2984640 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 2984640 00:10:50.905 [2024-05-15 17:01:38.552367] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:50.905 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 2984640 00:10:51.164 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:51.164 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:51.164 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:51.164 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.164 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:51.164 17:01:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.164 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:51.164 17:01:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.703 17:01:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:53.703 00:10:53.703 real 0m12.704s 00:10:53.703 user 0m21.531s 00:10:53.703 sys 0m4.580s 00:10:53.703 17:01:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:53.703 17:01:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:53.703 ************************************ 00:10:53.703 END TEST nvmf_nvme_cli 00:10:53.703 ************************************ 00:10:53.703 17:01:40 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:10:53.703 17:01:40 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:53.703 17:01:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:53.703 17:01:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:53.703 17:01:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:53.703 ************************************ 00:10:53.703 
START TEST nvmf_vfio_user 00:10:53.703 ************************************ 00:10:53.703 17:01:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:10:53.703 * Looking for test storage... 00:10:53.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.703 17:01:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2986035 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2986035' 00:10:53.704 Process pid: 2986035 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2986035 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 2986035 ']' 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:53.704 17:01:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:53.704 [2024-05-15 17:01:41.116322] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:10:53.704 [2024-05-15 17:01:41.116365] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.704 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.704 [2024-05-15 17:01:41.168815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.704 [2024-05-15 17:01:41.241834] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.704 [2024-05-15 17:01:41.241871] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.704 [2024-05-15 17:01:41.241877] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.704 [2024-05-15 17:01:41.241883] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.704 [2024-05-15 17:01:41.241888] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
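[editor's note] setup_nvmf_vfio_user above launches nvmf_tgt pinned to cores 0-3 with all tracepoint groups enabled and then waits for the RPC socket before configuring anything. A condensed sketch of that launch step, with a simplified stand-in for the harness's waitforlisten (relative paths are placeholders):

  # start the target on cores 0-3 with tracepoint mask 0xFFFF, as in the trace
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!

  # wait for the RPC socket to come up before issuing any rpc.py calls
  until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do sleep 0.5; done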
00:10:53.704 [2024-05-15 17:01:41.241980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.704 [2024-05-15 17:01:41.242096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.704 [2024-05-15 17:01:41.242188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.704 [2024-05-15 17:01:41.242190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.267 17:01:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:54.524 17:01:41 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:10:54.524 17:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:55.454 17:01:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:10:55.710 17:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:55.710 17:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:55.710 17:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:55.710 17:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:55.710 17:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:55.710 Malloc1 00:10:55.710 17:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:55.967 17:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:56.225 17:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:56.225 [2024-05-15 17:01:43.842001] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:56.225 17:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:56.225 17:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:56.225 17:01:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:56.482 Malloc2 00:10:56.482 17:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:56.738 17:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:56.995 17:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
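[editor's note] The setup sequence above boils down to one VFIOUSER transport plus, per device, a malloc bdev, a subsystem and a listener rooted at a directory path instead of an IP address. Condensed into plain rpc.py calls (first device shown; the trace repeats the same three calls for Malloc2/cnode2 under vfio-user2/2):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user1/1 -s 0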
00:10:56.995 17:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:10:56.995 17:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:10:56.995 17:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:56.995 17:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:56.995 17:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:10:56.995 17:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:56.996 [2024-05-15 17:01:44.624959] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:10:56.996 [2024-05-15 17:01:44.624992] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2986606 ] 00:10:56.996 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.996 [2024-05-15 17:01:44.654708] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:10:57.256 [2024-05-15 17:01:44.664502] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:57.256 [2024-05-15 17:01:44.664522] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1d9f09f000 00:10:57.256 [2024-05-15 17:01:44.665504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:57.256 [2024-05-15 17:01:44.666508] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:57.256 [2024-05-15 17:01:44.667513] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:57.256 [2024-05-15 17:01:44.668517] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:57.256 [2024-05-15 17:01:44.669527] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:57.256 [2024-05-15 17:01:44.670531] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:57.256 [2024-05-15 17:01:44.671540] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:57.256 [2024-05-15 17:01:44.672543] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:57.256 [2024-05-15 17:01:44.673551] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:57.256 [2024-05-15 17:01:44.673563] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1d9f094000 00:10:57.256 [2024-05-15 17:01:44.674507] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:57.256 [2024-05-15 17:01:44.683115] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:10:57.256 [2024-05-15 17:01:44.683140] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:10:57.256 [2024-05-15 17:01:44.687634] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:57.256 [2024-05-15 17:01:44.687669] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:57.256 [2024-05-15 17:01:44.687742] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:10:57.256 [2024-05-15 17:01:44.687758] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:10:57.256 [2024-05-15 17:01:44.687764] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:10:57.256 [2024-05-15 17:01:44.690174] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:10:57.256 [2024-05-15 17:01:44.690184] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:10:57.256 [2024-05-15 17:01:44.690190] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:10:57.256 [2024-05-15 17:01:44.690649] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:10:57.256 [2024-05-15 17:01:44.690657] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:10:57.256 [2024-05-15 17:01:44.690663] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:10:57.256 [2024-05-15 17:01:44.691657] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:10:57.256 [2024-05-15 17:01:44.691665] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:57.256 [2024-05-15 17:01:44.692660] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:10:57.256 [2024-05-15 17:01:44.692667] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:10:57.256 [2024-05-15 17:01:44.692741] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:10:57.256 [2024-05-15 17:01:44.692747] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:57.256 
[2024-05-15 17:01:44.692852] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:10:57.256 [2024-05-15 17:01:44.692856] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:57.256 [2024-05-15 17:01:44.692861] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:10:57.256 [2024-05-15 17:01:44.693668] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:10:57.256 [2024-05-15 17:01:44.694667] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:10:57.256 [2024-05-15 17:01:44.695676] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:57.256 [2024-05-15 17:01:44.696669] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:57.256 [2024-05-15 17:01:44.696726] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:57.256 [2024-05-15 17:01:44.697682] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:10:57.256 [2024-05-15 17:01:44.697690] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:57.256 [2024-05-15 17:01:44.697694] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:10:57.256 [2024-05-15 17:01:44.697711] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:10:57.256 [2024-05-15 17:01:44.697718] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:10:57.256 [2024-05-15 17:01:44.697735] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:57.256 [2024-05-15 17:01:44.697740] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:57.256 [2024-05-15 17:01:44.697753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:57.256 [2024-05-15 17:01:44.697804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:57.256 [2024-05-15 17:01:44.697813] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:10:57.256 [2024-05-15 17:01:44.697818] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:10:57.256 [2024-05-15 17:01:44.697822] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:10:57.256 [2024-05-15 17:01:44.697826] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:57.256 [2024-05-15 17:01:44.697830] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:10:57.256 [2024-05-15 17:01:44.697834] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:10:57.256 [2024-05-15 17:01:44.697842] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:10:57.256 [2024-05-15 17:01:44.697851] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:10:57.256 [2024-05-15 17:01:44.697863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:10:57.256 [2024-05-15 17:01:44.697876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:57.256 [2024-05-15 17:01:44.697888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.256 [2024-05-15 17:01:44.697896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.256 [2024-05-15 17:01:44.697903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.256 [2024-05-15 17:01:44.697910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:57.256 [2024-05-15 17:01:44.697915] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:10:57.256 [2024-05-15 17:01:44.697921] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:57.256 [2024-05-15 17:01:44.697929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:57.257 [2024-05-15 17:01:44.697936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:57.257 [2024-05-15 17:01:44.697942] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:10:57.257 [2024-05-15 17:01:44.697948] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:57.257 [2024-05-15 17:01:44.697954] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:10:57.257 [2024-05-15 17:01:44.697960] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:10:57.257 [2024-05-15 17:01:44.697968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:57.257 [2024-05-15 
17:01:44.697981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:57.257 [2024-05-15 17:01:44.698022] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:10:57.257 [2024-05-15 17:01:44.698028] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:10:57.257 [2024-05-15 17:01:44.698036] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:57.257 [2024-05-15 17:01:44.698040] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:57.257 [2024-05-15 17:01:44.698046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:10:57.257 [2024-05-15 17:01:44.698056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:57.257 [2024-05-15 17:01:44.698067] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:10:57.257 [2024-05-15 17:01:44.698081] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:10:57.257 [2024-05-15 17:01:44.698087] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:10:57.257 [2024-05-15 17:01:44.698093] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:57.257 [2024-05-15 17:01:44.698097] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:57.257 [2024-05-15 17:01:44.698102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:57.257 [2024-05-15 17:01:44.698120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:57.257 [2024-05-15 17:01:44.698132] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:57.257 [2024-05-15 17:01:44.698138] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:57.257 [2024-05-15 17:01:44.698144] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:57.257 [2024-05-15 17:01:44.698148] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:57.257 [2024-05-15 17:01:44.698153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:57.257 [2024-05-15 17:01:44.698162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:57.257 [2024-05-15 17:01:44.698180] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:57.257 
[2024-05-15 17:01:44.698185] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:10:57.257 [2024-05-15 17:01:44.698193] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:10:57.257 [2024-05-15 17:01:44.698199] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:57.257 [2024-05-15 17:01:44.698203] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:10:57.257 [2024-05-15 17:01:44.698208] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:10:57.257 [2024-05-15 17:01:44.698212] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:10:57.257 [2024-05-15 17:01:44.698216] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:10:57.257 [2024-05-15 17:01:44.698235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:57.257 [2024-05-15 17:01:44.698247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:57.257 [2024-05-15 17:01:44.698256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:57.257 [2024-05-15 17:01:44.698264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:57.257 [2024-05-15 17:01:44.698273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:57.257 [2024-05-15 17:01:44.698286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:57.257 [2024-05-15 17:01:44.698295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:57.257 [2024-05-15 17:01:44.698308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:57.257 [2024-05-15 17:01:44.698317] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:57.257 [2024-05-15 17:01:44.698321] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:57.257 [2024-05-15 17:01:44.698324] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:57.257 [2024-05-15 17:01:44.698327] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:57.257 [2024-05-15 17:01:44.698332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:57.257 [2024-05-15 17:01:44.698338] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:57.257 [2024-05-15 17:01:44.698342] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:57.257 [2024-05-15 17:01:44.698347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:57.257 [2024-05-15 17:01:44.698353] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:57.257 [2024-05-15 17:01:44.698357] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:57.257 [2024-05-15 17:01:44.698362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:57.257 [2024-05-15 17:01:44.698370] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:57.257 [2024-05-15 17:01:44.698374] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:57.257 [2024-05-15 17:01:44.698379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:57.257 [2024-05-15 17:01:44.698385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:57.257 [2024-05-15 17:01:44.698397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:57.257 [2024-05-15 17:01:44.698405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:57.257 [2024-05-15 17:01:44.698413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:57.257 ===================================================== 00:10:57.257 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:57.257 ===================================================== 00:10:57.257 Controller Capabilities/Features 00:10:57.257 ================================ 00:10:57.257 Vendor ID: 4e58 00:10:57.257 Subsystem Vendor ID: 4e58 00:10:57.257 Serial Number: SPDK1 00:10:57.257 Model Number: SPDK bdev Controller 00:10:57.257 Firmware Version: 24.05 00:10:57.257 Recommended Arb Burst: 6 00:10:57.257 IEEE OUI Identifier: 8d 6b 50 00:10:57.257 Multi-path I/O 00:10:57.257 May have multiple subsystem ports: Yes 00:10:57.257 May have multiple controllers: Yes 00:10:57.257 Associated with SR-IOV VF: No 00:10:57.257 Max Data Transfer Size: 131072 00:10:57.257 Max Number of Namespaces: 32 00:10:57.257 Max Number of I/O Queues: 127 00:10:57.257 NVMe Specification Version (VS): 1.3 00:10:57.257 NVMe Specification Version (Identify): 1.3 00:10:57.257 Maximum Queue Entries: 256 00:10:57.257 Contiguous Queues Required: Yes 00:10:57.257 Arbitration Mechanisms Supported 00:10:57.257 Weighted Round Robin: Not Supported 00:10:57.257 Vendor Specific: Not Supported 00:10:57.257 Reset Timeout: 15000 ms 00:10:57.257 Doorbell Stride: 4 bytes 00:10:57.257 NVM Subsystem Reset: Not Supported 00:10:57.257 Command Sets Supported 00:10:57.257 NVM Command Set: Supported 00:10:57.257 Boot Partition: Not Supported 00:10:57.257 Memory Page Size Minimum: 4096 bytes 00:10:57.257 Memory Page Size Maximum: 4096 bytes 00:10:57.257 Persistent Memory Region: Not Supported 00:10:57.257 Optional Asynchronous 
Events Supported 00:10:57.257 Namespace Attribute Notices: Supported 00:10:57.257 Firmware Activation Notices: Not Supported 00:10:57.257 ANA Change Notices: Not Supported 00:10:57.257 PLE Aggregate Log Change Notices: Not Supported 00:10:57.257 LBA Status Info Alert Notices: Not Supported 00:10:57.257 EGE Aggregate Log Change Notices: Not Supported 00:10:57.257 Normal NVM Subsystem Shutdown event: Not Supported 00:10:57.257 Zone Descriptor Change Notices: Not Supported 00:10:57.257 Discovery Log Change Notices: Not Supported 00:10:57.257 Controller Attributes 00:10:57.257 128-bit Host Identifier: Supported 00:10:57.257 Non-Operational Permissive Mode: Not Supported 00:10:57.257 NVM Sets: Not Supported 00:10:57.257 Read Recovery Levels: Not Supported 00:10:57.257 Endurance Groups: Not Supported 00:10:57.257 Predictable Latency Mode: Not Supported 00:10:57.258 Traffic Based Keep ALive: Not Supported 00:10:57.258 Namespace Granularity: Not Supported 00:10:57.258 SQ Associations: Not Supported 00:10:57.258 UUID List: Not Supported 00:10:57.258 Multi-Domain Subsystem: Not Supported 00:10:57.258 Fixed Capacity Management: Not Supported 00:10:57.258 Variable Capacity Management: Not Supported 00:10:57.258 Delete Endurance Group: Not Supported 00:10:57.258 Delete NVM Set: Not Supported 00:10:57.258 Extended LBA Formats Supported: Not Supported 00:10:57.258 Flexible Data Placement Supported: Not Supported 00:10:57.258 00:10:57.258 Controller Memory Buffer Support 00:10:57.258 ================================ 00:10:57.258 Supported: No 00:10:57.258 00:10:57.258 Persistent Memory Region Support 00:10:57.258 ================================ 00:10:57.258 Supported: No 00:10:57.258 00:10:57.258 Admin Command Set Attributes 00:10:57.258 ============================ 00:10:57.258 Security Send/Receive: Not Supported 00:10:57.258 Format NVM: Not Supported 00:10:57.258 Firmware Activate/Download: Not Supported 00:10:57.258 Namespace Management: Not Supported 00:10:57.258 Device Self-Test: Not Supported 00:10:57.258 Directives: Not Supported 00:10:57.258 NVMe-MI: Not Supported 00:10:57.258 Virtualization Management: Not Supported 00:10:57.258 Doorbell Buffer Config: Not Supported 00:10:57.258 Get LBA Status Capability: Not Supported 00:10:57.258 Command & Feature Lockdown Capability: Not Supported 00:10:57.258 Abort Command Limit: 4 00:10:57.258 Async Event Request Limit: 4 00:10:57.258 Number of Firmware Slots: N/A 00:10:57.258 Firmware Slot 1 Read-Only: N/A 00:10:57.258 Firmware Activation Without Reset: N/A 00:10:57.258 Multiple Update Detection Support: N/A 00:10:57.258 Firmware Update Granularity: No Information Provided 00:10:57.258 Per-Namespace SMART Log: No 00:10:57.258 Asymmetric Namespace Access Log Page: Not Supported 00:10:57.258 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:10:57.258 Command Effects Log Page: Supported 00:10:57.258 Get Log Page Extended Data: Supported 00:10:57.258 Telemetry Log Pages: Not Supported 00:10:57.258 Persistent Event Log Pages: Not Supported 00:10:57.258 Supported Log Pages Log Page: May Support 00:10:57.258 Commands Supported & Effects Log Page: Not Supported 00:10:57.258 Feature Identifiers & Effects Log Page:May Support 00:10:57.258 NVMe-MI Commands & Effects Log Page: May Support 00:10:57.258 Data Area 4 for Telemetry Log: Not Supported 00:10:57.258 Error Log Page Entries Supported: 128 00:10:57.258 Keep Alive: Supported 00:10:57.258 Keep Alive Granularity: 10000 ms 00:10:57.258 00:10:57.258 NVM Command Set Attributes 00:10:57.258 ========================== 
00:10:57.258 Submission Queue Entry Size 00:10:57.258 Max: 64 00:10:57.258 Min: 64 00:10:57.258 Completion Queue Entry Size 00:10:57.258 Max: 16 00:10:57.258 Min: 16 00:10:57.258 Number of Namespaces: 32 00:10:57.258 Compare Command: Supported 00:10:57.258 Write Uncorrectable Command: Not Supported 00:10:57.258 Dataset Management Command: Supported 00:10:57.258 Write Zeroes Command: Supported 00:10:57.258 Set Features Save Field: Not Supported 00:10:57.258 Reservations: Not Supported 00:10:57.258 Timestamp: Not Supported 00:10:57.258 Copy: Supported 00:10:57.258 Volatile Write Cache: Present 00:10:57.258 Atomic Write Unit (Normal): 1 00:10:57.258 Atomic Write Unit (PFail): 1 00:10:57.258 Atomic Compare & Write Unit: 1 00:10:57.258 Fused Compare & Write: Supported 00:10:57.258 Scatter-Gather List 00:10:57.258 SGL Command Set: Supported (Dword aligned) 00:10:57.258 SGL Keyed: Not Supported 00:10:57.258 SGL Bit Bucket Descriptor: Not Supported 00:10:57.258 SGL Metadata Pointer: Not Supported 00:10:57.258 Oversized SGL: Not Supported 00:10:57.258 SGL Metadata Address: Not Supported 00:10:57.258 SGL Offset: Not Supported 00:10:57.258 Transport SGL Data Block: Not Supported 00:10:57.258 Replay Protected Memory Block: Not Supported 00:10:57.258 00:10:57.258 Firmware Slot Information 00:10:57.258 ========================= 00:10:57.258 Active slot: 1 00:10:57.258 Slot 1 Firmware Revision: 24.05 00:10:57.258 00:10:57.258 00:10:57.258 Commands Supported and Effects 00:10:57.258 ============================== 00:10:57.258 Admin Commands 00:10:57.258 -------------- 00:10:57.258 Get Log Page (02h): Supported 00:10:57.258 Identify (06h): Supported 00:10:57.258 Abort (08h): Supported 00:10:57.258 Set Features (09h): Supported 00:10:57.258 Get Features (0Ah): Supported 00:10:57.258 Asynchronous Event Request (0Ch): Supported 00:10:57.258 Keep Alive (18h): Supported 00:10:57.258 I/O Commands 00:10:57.258 ------------ 00:10:57.258 Flush (00h): Supported LBA-Change 00:10:57.258 Write (01h): Supported LBA-Change 00:10:57.258 Read (02h): Supported 00:10:57.258 Compare (05h): Supported 00:10:57.258 Write Zeroes (08h): Supported LBA-Change 00:10:57.258 Dataset Management (09h): Supported LBA-Change 00:10:57.258 Copy (19h): Supported LBA-Change 00:10:57.258 Unknown (79h): Supported LBA-Change 00:10:57.258 Unknown (7Ah): Supported 00:10:57.258 00:10:57.258 Error Log 00:10:57.258 ========= 00:10:57.258 00:10:57.258 Arbitration 00:10:57.258 =========== 00:10:57.258 Arbitration Burst: 1 00:10:57.258 00:10:57.258 Power Management 00:10:57.258 ================ 00:10:57.258 Number of Power States: 1 00:10:57.258 Current Power State: Power State #0 00:10:57.258 Power State #0: 00:10:57.258 Max Power: 0.00 W 00:10:57.258 Non-Operational State: Operational 00:10:57.258 Entry Latency: Not Reported 00:10:57.258 Exit Latency: Not Reported 00:10:57.258 Relative Read Throughput: 0 00:10:57.258 Relative Read Latency: 0 00:10:57.258 Relative Write Throughput: 0 00:10:57.258 Relative Write Latency: 0 00:10:57.258 Idle Power: Not Reported 00:10:57.258 Active Power: Not Reported 00:10:57.258 Non-Operational Permissive Mode: Not Supported 00:10:57.258 00:10:57.258 Health Information 00:10:57.258 ================== 00:10:57.258 Critical Warnings: 00:10:57.258 Available Spare Space: OK 00:10:57.258 Temperature: OK 00:10:57.258 Device Reliability: OK 00:10:57.258 Read Only: No 00:10:57.258 Volatile Memory Backup: OK 00:10:57.258 Current Temperature: 0 Kelvin (-2[2024-05-15 17:01:44.698497] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:57.258 [2024-05-15 17:01:44.698504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:57.258 [2024-05-15 17:01:44.698525] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:10:57.258 [2024-05-15 17:01:44.698533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.258 [2024-05-15 17:01:44.698539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.258 [2024-05-15 17:01:44.698544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.258 [2024-05-15 17:01:44.698549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:57.258 [2024-05-15 17:01:44.698689] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:10:57.258 [2024-05-15 17:01:44.698697] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:10:57.258 [2024-05-15 17:01:44.699693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:57.258 [2024-05-15 17:01:44.699741] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:10:57.258 [2024-05-15 17:01:44.699748] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:10:57.258 [2024-05-15 17:01:44.700701] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:10:57.258 [2024-05-15 17:01:44.700711] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:10:57.258 [2024-05-15 17:01:44.700760] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:10:57.258 [2024-05-15 17:01:44.706173] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:57.258 73 Celsius) 00:10:57.258 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:57.258 Available Spare: 0% 00:10:57.258 Available Spare Threshold: 0% 00:10:57.258 Life Percentage Used: 0% 00:10:57.258 Data Units Read: 0 00:10:57.258 Data Units Written: 0 00:10:57.258 Host Read Commands: 0 00:10:57.258 Host Write Commands: 0 00:10:57.258 Controller Busy Time: 0 minutes 00:10:57.258 Power Cycles: 0 00:10:57.258 Power On Hours: 0 hours 00:10:57.258 Unsafe Shutdowns: 0 00:10:57.258 Unrecoverable Media Errors: 0 00:10:57.258 Lifetime Error Log Entries: 0 00:10:57.258 Warning Temperature Time: 0 minutes 00:10:57.258 Critical Temperature Time: 0 minutes 00:10:57.258 00:10:57.258 Number of Queues 00:10:57.258 ================ 00:10:57.258 Number of I/O Submission Queues: 127 00:10:57.258 Number of I/O Completion Queues: 127 00:10:57.258 00:10:57.259 Active Namespaces 00:10:57.259 ================= 00:10:57.259 Namespace 
ID:1 00:10:57.259 Error Recovery Timeout: Unlimited 00:10:57.259 Command Set Identifier: NVM (00h) 00:10:57.259 Deallocate: Supported 00:10:57.259 Deallocated/Unwritten Error: Not Supported 00:10:57.259 Deallocated Read Value: Unknown 00:10:57.259 Deallocate in Write Zeroes: Not Supported 00:10:57.259 Deallocated Guard Field: 0xFFFF 00:10:57.259 Flush: Supported 00:10:57.259 Reservation: Supported 00:10:57.259 Namespace Sharing Capabilities: Multiple Controllers 00:10:57.259 Size (in LBAs): 131072 (0GiB) 00:10:57.259 Capacity (in LBAs): 131072 (0GiB) 00:10:57.259 Utilization (in LBAs): 131072 (0GiB) 00:10:57.259 NGUID: 29A55CED10BB46188E0F806A23F28629 00:10:57.259 UUID: 29a55ced-10bb-4618-8e0f-806a23f28629 00:10:57.259 Thin Provisioning: Not Supported 00:10:57.259 Per-NS Atomic Units: Yes 00:10:57.259 Atomic Boundary Size (Normal): 0 00:10:57.259 Atomic Boundary Size (PFail): 0 00:10:57.259 Atomic Boundary Offset: 0 00:10:57.259 Maximum Single Source Range Length: 65535 00:10:57.259 Maximum Copy Length: 65535 00:10:57.259 Maximum Source Range Count: 1 00:10:57.259 NGUID/EUI64 Never Reused: No 00:10:57.259 Namespace Write Protected: No 00:10:57.259 Number of LBA Formats: 1 00:10:57.259 Current LBA Format: LBA Format #00 00:10:57.259 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:57.259 00:10:57.259 17:01:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:57.259 EAL: No free 2048 kB hugepages reported on node 1 00:10:57.516 [2024-05-15 17:01:44.918925] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:02.797 Initializing NVMe Controllers 00:11:02.797 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:02.797 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:02.797 Initialization complete. Launching workers. 00:11:02.797 ======================================================== 00:11:02.797 Latency(us) 00:11:02.797 Device Information : IOPS MiB/s Average min max 00:11:02.797 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39952.33 156.06 3203.64 971.12 6654.10 00:11:02.797 ======================================================== 00:11:02.797 Total : 39952.33 156.06 3203.64 971.12 6654.10 00:11:02.797 00:11:02.797 [2024-05-15 17:01:49.939394] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:02.797 17:01:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:02.797 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.797 [2024-05-15 17:01:50.165403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:08.048 Initializing NVMe Controllers 00:11:08.048 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:08.048 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:08.048 Initialization complete. Launching workers. 
00:11:08.048 ======================================================== 00:11:08.048 Latency(us) 00:11:08.048 Device Information : IOPS MiB/s Average min max 00:11:08.048 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.25 62.71 7978.30 6978.75 8045.58 00:11:08.048 ======================================================== 00:11:08.048 Total : 16054.25 62.71 7978.30 6978.75 8045.58 00:11:08.048 00:11:08.048 [2024-05-15 17:01:55.207714] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:08.048 17:01:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:08.048 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.048 [2024-05-15 17:01:55.397633] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:13.307 [2024-05-15 17:02:00.462398] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:13.307 Initializing NVMe Controllers 00:11:13.307 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:13.307 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:13.307 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:13.307 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:13.307 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:13.307 Initialization complete. Launching workers. 00:11:13.307 Starting thread on core 2 00:11:13.307 Starting thread on core 3 00:11:13.307 Starting thread on core 1 00:11:13.307 17:02:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:13.307 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.307 [2024-05-15 17:02:00.744158] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:16.601 [2024-05-15 17:02:03.804416] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:16.601 Initializing NVMe Controllers 00:11:16.601 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:16.601 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:16.601 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:16.601 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:16.601 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:16.601 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:16.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:16.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:16.601 Initialization complete. Launching workers. 
00:11:16.601 Starting thread on core 1 with urgent priority queue 00:11:16.601 Starting thread on core 2 with urgent priority queue 00:11:16.601 Starting thread on core 3 with urgent priority queue 00:11:16.601 Starting thread on core 0 with urgent priority queue 00:11:16.601 SPDK bdev Controller (SPDK1 ) core 0: 7588.33 IO/s 13.18 secs/100000 ios 00:11:16.601 SPDK bdev Controller (SPDK1 ) core 1: 7804.33 IO/s 12.81 secs/100000 ios 00:11:16.601 SPDK bdev Controller (SPDK1 ) core 2: 10701.67 IO/s 9.34 secs/100000 ios 00:11:16.601 SPDK bdev Controller (SPDK1 ) core 3: 9095.67 IO/s 10.99 secs/100000 ios 00:11:16.601 ======================================================== 00:11:16.601 00:11:16.601 17:02:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:16.601 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.601 [2024-05-15 17:02:04.073647] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:16.601 Initializing NVMe Controllers 00:11:16.601 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:16.601 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:16.601 Namespace ID: 1 size: 0GB 00:11:16.601 Initialization complete. 00:11:16.601 INFO: using host memory buffer for IO 00:11:16.601 Hello world! 00:11:16.601 [2024-05-15 17:02:04.109873] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:16.601 17:02:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:16.601 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.859 [2024-05-15 17:02:04.377615] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:17.790 Initializing NVMe Controllers 00:11:17.790 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:17.790 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:17.790 Initialization complete. Launching workers. 
00:11:17.790 submit (in ns) avg, min, max = 7360.1, 3226.1, 3998925.2 00:11:17.790 complete (in ns) avg, min, max = 21814.9, 1777.4, 3999467.8 00:11:17.790 00:11:17.790 Submit histogram 00:11:17.790 ================ 00:11:17.790 Range in us Cumulative Count 00:11:17.790 3.214 - 3.228: 0.0061% ( 1) 00:11:17.790 3.228 - 3.242: 0.0305% ( 4) 00:11:17.790 3.242 - 3.256: 0.0792% ( 8) 00:11:17.790 3.256 - 3.270: 0.1097% ( 5) 00:11:17.790 3.270 - 3.283: 0.1645% ( 9) 00:11:17.790 3.283 - 3.297: 0.2803% ( 19) 00:11:17.790 3.297 - 3.311: 0.9141% ( 104) 00:11:17.790 3.311 - 3.325: 3.0654% ( 353) 00:11:17.790 3.325 - 3.339: 5.7590% ( 442) 00:11:17.790 3.339 - 3.353: 9.1109% ( 550) 00:11:17.790 3.353 - 3.367: 13.4804% ( 717) 00:11:17.790 3.367 - 3.381: 18.8921% ( 888) 00:11:17.790 3.381 - 3.395: 24.4683% ( 915) 00:11:17.790 3.395 - 3.409: 30.2761% ( 953) 00:11:17.790 3.409 - 3.423: 35.8035% ( 907) 00:11:17.790 3.423 - 3.437: 41.0445% ( 860) 00:11:17.790 3.437 - 3.450: 45.7919% ( 779) 00:11:17.790 3.450 - 3.464: 51.5205% ( 940) 00:11:17.790 3.464 - 3.478: 57.0541% ( 908) 00:11:17.790 3.478 - 3.492: 61.3809% ( 710) 00:11:17.790 3.492 - 3.506: 66.6281% ( 861) 00:11:17.790 3.506 - 3.520: 72.3749% ( 943) 00:11:17.790 3.520 - 3.534: 76.5495% ( 685) 00:11:17.790 3.534 - 3.548: 79.9378% ( 556) 00:11:17.790 3.548 - 3.562: 83.1922% ( 534) 00:11:17.790 3.562 - 3.590: 86.7512% ( 584) 00:11:17.790 3.590 - 3.617: 88.2686% ( 249) 00:11:17.790 3.617 - 3.645: 89.4448% ( 193) 00:11:17.790 3.645 - 3.673: 90.9623% ( 249) 00:11:17.790 3.673 - 3.701: 92.4858% ( 250) 00:11:17.790 3.701 - 3.729: 94.2532% ( 290) 00:11:17.790 3.729 - 3.757: 95.8864% ( 268) 00:11:17.790 3.757 - 3.784: 97.1967% ( 215) 00:11:17.790 3.784 - 3.812: 98.1961% ( 164) 00:11:17.790 3.812 - 3.840: 98.7507% ( 91) 00:11:17.790 3.840 - 3.868: 99.1407% ( 64) 00:11:17.790 3.868 - 3.896: 99.4454% ( 50) 00:11:17.790 3.896 - 3.923: 99.5307% ( 14) 00:11:17.790 3.923 - 3.951: 99.6039% ( 12) 00:11:17.790 3.951 - 3.979: 99.6100% ( 1) 00:11:17.790 3.979 - 4.007: 99.6222% ( 2) 00:11:17.790 4.007 - 4.035: 99.6343% ( 2) 00:11:17.790 4.090 - 4.118: 99.6404% ( 1) 00:11:17.790 5.287 - 5.315: 99.6465% ( 1) 00:11:17.790 5.343 - 5.370: 99.6526% ( 1) 00:11:17.790 5.398 - 5.426: 99.6709% ( 3) 00:11:17.790 5.454 - 5.482: 99.6770% ( 1) 00:11:17.790 5.510 - 5.537: 99.6831% ( 1) 00:11:17.790 5.537 - 5.565: 99.6892% ( 1) 00:11:17.790 5.565 - 5.593: 99.6953% ( 1) 00:11:17.790 5.760 - 5.788: 99.7014% ( 1) 00:11:17.790 5.788 - 5.816: 99.7075% ( 1) 00:11:17.790 5.983 - 6.010: 99.7136% ( 1) 00:11:17.790 6.038 - 6.066: 99.7197% ( 1) 00:11:17.790 6.066 - 6.094: 99.7258% ( 1) 00:11:17.790 6.122 - 6.150: 99.7319% ( 1) 00:11:17.790 6.150 - 6.177: 99.7501% ( 3) 00:11:17.790 6.205 - 6.233: 99.7562% ( 1) 00:11:17.790 6.289 - 6.317: 99.7623% ( 1) 00:11:17.790 6.372 - 6.400: 99.7684% ( 1) 00:11:17.790 6.678 - 6.706: 99.7745% ( 1) 00:11:17.790 6.762 - 6.790: 99.7806% ( 1) 00:11:17.790 6.790 - 6.817: 99.7867% ( 1) 00:11:17.790 6.845 - 6.873: 99.7928% ( 1) 00:11:17.790 6.957 - 6.984: 99.7989% ( 1) 00:11:17.790 7.123 - 7.179: 99.8111% ( 2) 00:11:17.790 7.346 - 7.402: 99.8172% ( 1) 00:11:17.790 7.457 - 7.513: 99.8233% ( 1) 00:11:17.790 7.513 - 7.569: 99.8294% ( 1) 00:11:17.790 8.070 - 8.125: 99.8355% ( 1) 00:11:17.790 8.237 - 8.292: 99.8416% ( 1) 00:11:17.790 8.348 - 8.403: 99.8476% ( 1) 00:11:17.790 8.403 - 8.459: 99.8537% ( 1) 00:11:17.790 8.459 - 8.515: 99.8659% ( 2) 00:11:17.790 8.626 - 8.682: 99.8720% ( 1) 00:11:17.790 9.016 - 9.071: 99.8781% ( 1) 00:11:17.790 9.683 - 9.739: 99.8842% ( 1) 
00:11:17.790 11.242 - 11.297: 99.8903% ( 1) 00:11:17.790 13.802 - 13.857: 99.8964% ( 1) 00:11:17.790 [2024-05-15 17:02:05.399647] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:17.790 13.969 - 14.024: 99.9025% ( 1) 00:11:17.790 3618.727 - 3632.974: 99.9086% ( 1) 00:11:17.790 3989.148 - 4017.642: 100.0000% ( 15) 00:11:17.790 00:11:17.790 Complete histogram 00:11:17.790 ================== 00:11:17.790 Range in us Cumulative Count 00:11:17.790 1.774 - 1.781: 0.0183% ( 3) 00:11:17.790 1.781 - 1.795: 0.0427% ( 4) 00:11:17.790 1.795 - 1.809: 0.0609% ( 3) 00:11:17.790 1.809 - 1.823: 0.0670% ( 1) 00:11:17.790 1.823 - 1.837: 1.1762% ( 182) 00:11:17.790 1.837 - 1.850: 3.8515% ( 439) 00:11:17.790 1.850 - 1.864: 5.5701% ( 282) 00:11:17.790 1.864 - 1.878: 9.6045% ( 662) 00:11:17.790 1.878 - 1.892: 53.0623% ( 7131) 00:11:17.790 1.892 - 1.906: 84.3501% ( 5134) 00:11:17.790 1.906 - 1.920: 91.1024% ( 1108) 00:11:17.790 1.920 - 1.934: 95.5329% ( 727) 00:11:17.790 1.934 - 1.948: 96.7944% ( 207) 00:11:17.790 1.948 - 1.962: 97.9950% ( 197) 00:11:17.790 1.962 - 1.976: 98.9091% ( 150) 00:11:17.790 1.976 - 1.990: 99.1407% ( 38) 00:11:17.790 1.990 - 2.003: 99.1834% ( 7) 00:11:17.790 2.003 - 2.017: 99.2138% ( 5) 00:11:17.790 2.017 - 2.031: 99.2199% ( 1) 00:11:17.790 2.031 - 2.045: 99.2260% ( 1) 00:11:17.790 2.045 - 2.059: 99.2443% ( 3) 00:11:17.790 2.059 - 2.073: 99.2565% ( 2) 00:11:17.790 2.101 - 2.115: 99.2626% ( 1) 00:11:17.790 2.240 - 2.254: 99.2687% ( 1) 00:11:17.790 3.673 - 3.701: 99.2748% ( 1) 00:11:17.790 3.757 - 3.784: 99.2809% ( 1) 00:11:17.790 3.784 - 3.812: 99.2931% ( 2) 00:11:17.790 3.812 - 3.840: 99.3053% ( 2) 00:11:17.790 3.840 - 3.868: 99.3114% ( 1) 00:11:17.790 3.896 - 3.923: 99.3174% ( 1) 00:11:17.790 3.951 - 3.979: 99.3235% ( 1) 00:11:17.790 4.035 - 4.063: 99.3296% ( 1) 00:11:17.790 4.174 - 4.202: 99.3357% ( 1) 00:11:17.790 4.202 - 4.230: 99.3479% ( 2) 00:11:17.790 4.480 - 4.508: 99.3540% ( 1) 00:11:17.790 4.536 - 4.563: 99.3601% ( 1) 00:11:17.790 4.814 - 4.842: 99.3662% ( 1) 00:11:17.790 4.870 - 4.897: 99.3723% ( 1) 00:11:17.790 4.897 - 4.925: 99.3845% ( 2) 00:11:17.790 5.092 - 5.120: 99.3967% ( 2) 00:11:17.790 5.148 - 5.176: 99.4028% ( 1) 00:11:17.790 5.203 - 5.231: 99.4089% ( 1) 00:11:17.790 5.231 - 5.259: 99.4150% ( 1) 00:11:17.790 5.259 - 5.287: 99.4210% ( 1) 00:11:17.790 5.370 - 5.398: 99.4332% ( 2) 00:11:17.790 5.426 - 5.454: 99.4393% ( 1) 00:11:17.790 5.510 - 5.537: 99.4454% ( 1) 00:11:17.790 5.537 - 5.565: 99.4515% ( 1) 00:11:17.790 5.649 - 5.677: 99.4576% ( 1) 00:11:17.790 5.760 - 5.788: 99.4637% ( 1) 00:11:17.790 6.511 - 6.539: 99.4698% ( 1) 00:11:17.790 7.235 - 7.290: 99.4759% ( 1) 00:11:17.790 7.569 - 7.624: 99.4820% ( 1) 00:11:17.790 7.624 - 7.680: 99.4881% ( 1) 00:11:17.790 8.459 - 8.515: 99.4942% ( 1) 00:11:17.790 14.191 - 14.247: 99.5003% ( 1) 00:11:17.791 3262.553 - 3276.800: 99.5064% ( 1) 00:11:17.791 3989.148 - 4017.642: 100.0000% ( 81) 00:11:17.791 00:11:17.791 17:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:17.791 17:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:17.791 17:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:17.791 17:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:17.791 17:02:05 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:18.046 [ 00:11:18.046 { 00:11:18.046 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:18.046 "subtype": "Discovery", 00:11:18.046 "listen_addresses": [], 00:11:18.046 "allow_any_host": true, 00:11:18.046 "hosts": [] 00:11:18.046 }, 00:11:18.046 { 00:11:18.046 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:18.046 "subtype": "NVMe", 00:11:18.046 "listen_addresses": [ 00:11:18.046 { 00:11:18.046 "trtype": "VFIOUSER", 00:11:18.046 "adrfam": "IPv4", 00:11:18.046 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:18.046 "trsvcid": "0" 00:11:18.046 } 00:11:18.046 ], 00:11:18.046 "allow_any_host": true, 00:11:18.046 "hosts": [], 00:11:18.046 "serial_number": "SPDK1", 00:11:18.046 "model_number": "SPDK bdev Controller", 00:11:18.046 "max_namespaces": 32, 00:11:18.046 "min_cntlid": 1, 00:11:18.046 "max_cntlid": 65519, 00:11:18.046 "namespaces": [ 00:11:18.046 { 00:11:18.046 "nsid": 1, 00:11:18.046 "bdev_name": "Malloc1", 00:11:18.046 "name": "Malloc1", 00:11:18.046 "nguid": "29A55CED10BB46188E0F806A23F28629", 00:11:18.046 "uuid": "29a55ced-10bb-4618-8e0f-806a23f28629" 00:11:18.046 } 00:11:18.046 ] 00:11:18.046 }, 00:11:18.046 { 00:11:18.046 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:18.046 "subtype": "NVMe", 00:11:18.046 "listen_addresses": [ 00:11:18.046 { 00:11:18.046 "trtype": "VFIOUSER", 00:11:18.046 "adrfam": "IPv4", 00:11:18.046 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:18.046 "trsvcid": "0" 00:11:18.046 } 00:11:18.046 ], 00:11:18.046 "allow_any_host": true, 00:11:18.046 "hosts": [], 00:11:18.046 "serial_number": "SPDK2", 00:11:18.046 "model_number": "SPDK bdev Controller", 00:11:18.047 "max_namespaces": 32, 00:11:18.047 "min_cntlid": 1, 00:11:18.047 "max_cntlid": 65519, 00:11:18.047 "namespaces": [ 00:11:18.047 { 00:11:18.047 "nsid": 1, 00:11:18.047 "bdev_name": "Malloc2", 00:11:18.047 "name": "Malloc2", 00:11:18.047 "nguid": "6E8DC514A34F411B92E498AD50A24542", 00:11:18.047 "uuid": "6e8dc514-a34f-411b-92e4-98ad50a24542" 00:11:18.047 } 00:11:18.047 ] 00:11:18.047 } 00:11:18.047 ] 00:11:18.047 17:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:18.047 17:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2990170 00:11:18.047 17:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:18.047 17:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:18.047 17:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:11:18.047 17:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:18.047 17:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:18.047 17:02:05 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:11:18.047 17:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:18.047 17:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:18.047 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.303 [2024-05-15 17:02:05.776727] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:18.303 Malloc3 00:11:18.303 17:02:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:18.558 [2024-05-15 17:02:05.995313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:18.558 17:02:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:18.559 Asynchronous Event Request test 00:11:18.559 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:18.559 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:18.559 Registering asynchronous event callbacks... 00:11:18.559 Starting namespace attribute notice tests for all controllers... 00:11:18.559 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:18.559 aer_cb - Changed Namespace 00:11:18.559 Cleaning up... 00:11:18.559 [ 00:11:18.559 { 00:11:18.559 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:18.559 "subtype": "Discovery", 00:11:18.559 "listen_addresses": [], 00:11:18.559 "allow_any_host": true, 00:11:18.559 "hosts": [] 00:11:18.559 }, 00:11:18.559 { 00:11:18.559 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:18.559 "subtype": "NVMe", 00:11:18.559 "listen_addresses": [ 00:11:18.559 { 00:11:18.559 "trtype": "VFIOUSER", 00:11:18.559 "adrfam": "IPv4", 00:11:18.559 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:18.559 "trsvcid": "0" 00:11:18.559 } 00:11:18.559 ], 00:11:18.559 "allow_any_host": true, 00:11:18.559 "hosts": [], 00:11:18.559 "serial_number": "SPDK1", 00:11:18.559 "model_number": "SPDK bdev Controller", 00:11:18.559 "max_namespaces": 32, 00:11:18.559 "min_cntlid": 1, 00:11:18.559 "max_cntlid": 65519, 00:11:18.559 "namespaces": [ 00:11:18.559 { 00:11:18.559 "nsid": 1, 00:11:18.559 "bdev_name": "Malloc1", 00:11:18.559 "name": "Malloc1", 00:11:18.559 "nguid": "29A55CED10BB46188E0F806A23F28629", 00:11:18.559 "uuid": "29a55ced-10bb-4618-8e0f-806a23f28629" 00:11:18.559 }, 00:11:18.559 { 00:11:18.559 "nsid": 2, 00:11:18.559 "bdev_name": "Malloc3", 00:11:18.559 "name": "Malloc3", 00:11:18.559 "nguid": "5450154B536E449D9A0C71CA5E73F407", 00:11:18.559 "uuid": "5450154b-536e-449d-9a0c-71ca5e73f407" 00:11:18.559 } 00:11:18.559 ] 00:11:18.559 }, 00:11:18.559 { 00:11:18.559 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:18.559 "subtype": "NVMe", 00:11:18.559 "listen_addresses": [ 00:11:18.559 { 00:11:18.559 "trtype": "VFIOUSER", 00:11:18.559 "adrfam": "IPv4", 00:11:18.559 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:18.559 "trsvcid": "0" 00:11:18.559 } 00:11:18.559 ], 00:11:18.559 "allow_any_host": true, 00:11:18.559 "hosts": [], 00:11:18.559 "serial_number": "SPDK2", 00:11:18.559 "model_number": "SPDK bdev Controller", 00:11:18.559 
"max_namespaces": 32, 00:11:18.559 "min_cntlid": 1, 00:11:18.559 "max_cntlid": 65519, 00:11:18.559 "namespaces": [ 00:11:18.559 { 00:11:18.559 "nsid": 1, 00:11:18.559 "bdev_name": "Malloc2", 00:11:18.559 "name": "Malloc2", 00:11:18.559 "nguid": "6E8DC514A34F411B92E498AD50A24542", 00:11:18.559 "uuid": "6e8dc514-a34f-411b-92e4-98ad50a24542" 00:11:18.559 } 00:11:18.559 ] 00:11:18.559 } 00:11:18.559 ] 00:11:18.559 17:02:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2990170 00:11:18.559 17:02:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:18.559 17:02:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:18.559 17:02:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:18.559 17:02:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:18.559 [2024-05-15 17:02:06.214341] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:11:18.559 [2024-05-15 17:02:06.214387] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990204 ] 00:11:18.816 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.816 [2024-05-15 17:02:06.243612] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:18.816 [2024-05-15 17:02:06.250393] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:18.816 [2024-05-15 17:02:06.250413] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff1b6512000 00:11:18.816 [2024-05-15 17:02:06.251392] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:18.816 [2024-05-15 17:02:06.252398] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:18.816 [2024-05-15 17:02:06.253402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:18.816 [2024-05-15 17:02:06.254418] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:18.816 [2024-05-15 17:02:06.255422] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:18.816 [2024-05-15 17:02:06.256431] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:18.816 [2024-05-15 17:02:06.257436] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:18.816 [2024-05-15 17:02:06.258442] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:18.816 [2024-05-15 17:02:06.259451] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:18.816 [2024-05-15 17:02:06.259462] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff1b6507000 00:11:18.816 [2024-05-15 17:02:06.260403] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:18.816 [2024-05-15 17:02:06.271919] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:18.816 [2024-05-15 17:02:06.271941] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:18.816 [2024-05-15 17:02:06.277012] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:18.816 [2024-05-15 17:02:06.277052] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:18.816 [2024-05-15 17:02:06.277122] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:11:18.816 [2024-05-15 17:02:06.277135] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:18.816 [2024-05-15 17:02:06.277140] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:18.816 [2024-05-15 17:02:06.278018] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:18.816 [2024-05-15 17:02:06.278028] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:18.816 [2024-05-15 17:02:06.278034] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:18.816 [2024-05-15 17:02:06.279020] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:18.816 [2024-05-15 17:02:06.279030] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:18.816 [2024-05-15 17:02:06.279037] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:18.816 [2024-05-15 17:02:06.280028] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:18.816 [2024-05-15 17:02:06.280036] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:18.816 [2024-05-15 17:02:06.281035] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:18.816 [2024-05-15 17:02:06.281043] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:18.816 [2024-05-15 17:02:06.281048] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:18.816 [2024-05-15 17:02:06.281054] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:18.816 [2024-05-15 17:02:06.281159] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:18.816 [2024-05-15 17:02:06.281163] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:18.816 [2024-05-15 17:02:06.281173] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:18.816 [2024-05-15 17:02:06.282044] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:18.816 [2024-05-15 17:02:06.283060] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:18.816 [2024-05-15 17:02:06.284062] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:18.816 [2024-05-15 17:02:06.285068] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:18.816 [2024-05-15 17:02:06.285105] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:18.816 [2024-05-15 17:02:06.286077] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:18.816 [2024-05-15 17:02:06.286086] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:18.816 [2024-05-15 17:02:06.286090] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:18.816 [2024-05-15 17:02:06.286107] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:18.816 [2024-05-15 17:02:06.286116] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:18.816 [2024-05-15 17:02:06.286129] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:18.816 [2024-05-15 17:02:06.286133] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:18.816 [2024-05-15 17:02:06.286145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:18.816 [2024-05-15 17:02:06.294170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:18.816 [2024-05-15 17:02:06.294181] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:18.816 [2024-05-15 17:02:06.294185] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:18.816 [2024-05-15 17:02:06.294189] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:18.816 [2024-05-15 17:02:06.294193] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:18.816 [2024-05-15 17:02:06.294197] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:18.816 [2024-05-15 17:02:06.294201] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:18.816 [2024-05-15 17:02:06.294205] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:18.816 [2024-05-15 17:02:06.294214] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:18.816 [2024-05-15 17:02:06.294225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:18.816 [2024-05-15 17:02:06.302169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:18.816 [2024-05-15 17:02:06.302183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.816 [2024-05-15 17:02:06.302190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.817 [2024-05-15 17:02:06.302198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.817 [2024-05-15 17:02:06.302205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.817 [2024-05-15 17:02:06.302209] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.302215] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.302223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:18.817 [2024-05-15 17:02:06.310169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:18.817 [2024-05-15 17:02:06.310176] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:18.817 [2024-05-15 17:02:06.310182] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.310188] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.310196] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.310204] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:18.817 [2024-05-15 17:02:06.318170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:18.817 [2024-05-15 17:02:06.318215] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.318222] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.318229] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:18.817 [2024-05-15 17:02:06.318233] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:18.817 [2024-05-15 17:02:06.318239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:18.817 [2024-05-15 17:02:06.326169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:18.817 [2024-05-15 17:02:06.326183] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:18.817 [2024-05-15 17:02:06.326194] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.326201] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.326207] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:18.817 [2024-05-15 17:02:06.326211] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:18.817 [2024-05-15 17:02:06.326217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:18.817 [2024-05-15 17:02:06.334168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:18.817 [2024-05-15 17:02:06.334179] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.334186] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.334193] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:18.817 [2024-05-15 17:02:06.334197] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:18.817 [2024-05-15 17:02:06.334202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:18.817 [2024-05-15 17:02:06.342169] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:18.817 [2024-05-15 17:02:06.342182] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.342188] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.342194] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.342201] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.342206] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.342210] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:18.817 [2024-05-15 17:02:06.342214] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:18.817 [2024-05-15 17:02:06.342219] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:18.817 [2024-05-15 17:02:06.342235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:18.817 [2024-05-15 17:02:06.350168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:18.817 [2024-05-15 17:02:06.350180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:18.817 [2024-05-15 17:02:06.358170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:18.817 [2024-05-15 17:02:06.358182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:18.817 [2024-05-15 17:02:06.366168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:18.817 [2024-05-15 17:02:06.366180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:18.817 [2024-05-15 17:02:06.374168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:18.817 [2024-05-15 17:02:06.374179] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:18.817 [2024-05-15 17:02:06.374183] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:18.817 [2024-05-15 17:02:06.374186] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:18.817 [2024-05-15 17:02:06.374189] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:18.817 [2024-05-15 17:02:06.374195] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:18.817 [2024-05-15 17:02:06.374201] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:18.817 [2024-05-15 17:02:06.374205] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:18.817 [2024-05-15 17:02:06.374211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:18.817 [2024-05-15 17:02:06.374217] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:18.817 [2024-05-15 17:02:06.374220] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:18.817 [2024-05-15 17:02:06.374226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:18.817 [2024-05-15 17:02:06.374234] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:18.817 [2024-05-15 17:02:06.374238] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:18.817 [2024-05-15 17:02:06.374243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:18.817 [2024-05-15 17:02:06.382170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:18.817 [2024-05-15 17:02:06.382183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:18.817 [2024-05-15 17:02:06.382191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:18.817 [2024-05-15 17:02:06.382199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:18.817 ===================================================== 00:11:18.817 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:18.817 ===================================================== 00:11:18.817 Controller Capabilities/Features 00:11:18.817 ================================ 00:11:18.817 Vendor ID: 4e58 00:11:18.817 Subsystem Vendor ID: 4e58 00:11:18.817 Serial Number: SPDK2 00:11:18.817 Model Number: SPDK bdev Controller 00:11:18.817 Firmware Version: 24.05 00:11:18.817 Recommended Arb Burst: 6 00:11:18.817 IEEE OUI Identifier: 8d 6b 50 00:11:18.817 Multi-path I/O 00:11:18.817 May have multiple subsystem ports: Yes 00:11:18.817 May have multiple controllers: Yes 00:11:18.817 Associated with SR-IOV VF: No 00:11:18.817 Max Data Transfer Size: 131072 00:11:18.817 Max Number of Namespaces: 32 00:11:18.817 Max Number of I/O Queues: 127 00:11:18.817 NVMe Specification Version (VS): 1.3 00:11:18.817 NVMe Specification Version (Identify): 1.3 00:11:18.817 Maximum Queue Entries: 256 00:11:18.817 Contiguous Queues Required: Yes 00:11:18.817 Arbitration Mechanisms Supported 00:11:18.817 Weighted Round Robin: Not Supported 00:11:18.817 Vendor Specific: Not Supported 00:11:18.817 Reset Timeout: 15000 ms 00:11:18.817 Doorbell Stride: 4 bytes 
00:11:18.817 NVM Subsystem Reset: Not Supported 00:11:18.817 Command Sets Supported 00:11:18.817 NVM Command Set: Supported 00:11:18.817 Boot Partition: Not Supported 00:11:18.817 Memory Page Size Minimum: 4096 bytes 00:11:18.817 Memory Page Size Maximum: 4096 bytes 00:11:18.817 Persistent Memory Region: Not Supported 00:11:18.817 Optional Asynchronous Events Supported 00:11:18.817 Namespace Attribute Notices: Supported 00:11:18.817 Firmware Activation Notices: Not Supported 00:11:18.817 ANA Change Notices: Not Supported 00:11:18.817 PLE Aggregate Log Change Notices: Not Supported 00:11:18.817 LBA Status Info Alert Notices: Not Supported 00:11:18.817 EGE Aggregate Log Change Notices: Not Supported 00:11:18.817 Normal NVM Subsystem Shutdown event: Not Supported 00:11:18.817 Zone Descriptor Change Notices: Not Supported 00:11:18.817 Discovery Log Change Notices: Not Supported 00:11:18.817 Controller Attributes 00:11:18.817 128-bit Host Identifier: Supported 00:11:18.817 Non-Operational Permissive Mode: Not Supported 00:11:18.817 NVM Sets: Not Supported 00:11:18.817 Read Recovery Levels: Not Supported 00:11:18.817 Endurance Groups: Not Supported 00:11:18.817 Predictable Latency Mode: Not Supported 00:11:18.817 Traffic Based Keep ALive: Not Supported 00:11:18.817 Namespace Granularity: Not Supported 00:11:18.817 SQ Associations: Not Supported 00:11:18.817 UUID List: Not Supported 00:11:18.817 Multi-Domain Subsystem: Not Supported 00:11:18.817 Fixed Capacity Management: Not Supported 00:11:18.817 Variable Capacity Management: Not Supported 00:11:18.817 Delete Endurance Group: Not Supported 00:11:18.817 Delete NVM Set: Not Supported 00:11:18.817 Extended LBA Formats Supported: Not Supported 00:11:18.817 Flexible Data Placement Supported: Not Supported 00:11:18.817 00:11:18.817 Controller Memory Buffer Support 00:11:18.817 ================================ 00:11:18.817 Supported: No 00:11:18.817 00:11:18.817 Persistent Memory Region Support 00:11:18.817 ================================ 00:11:18.817 Supported: No 00:11:18.817 00:11:18.817 Admin Command Set Attributes 00:11:18.817 ============================ 00:11:18.817 Security Send/Receive: Not Supported 00:11:18.817 Format NVM: Not Supported 00:11:18.817 Firmware Activate/Download: Not Supported 00:11:18.817 Namespace Management: Not Supported 00:11:18.817 Device Self-Test: Not Supported 00:11:18.817 Directives: Not Supported 00:11:18.817 NVMe-MI: Not Supported 00:11:18.817 Virtualization Management: Not Supported 00:11:18.817 Doorbell Buffer Config: Not Supported 00:11:18.817 Get LBA Status Capability: Not Supported 00:11:18.817 Command & Feature Lockdown Capability: Not Supported 00:11:18.817 Abort Command Limit: 4 00:11:18.817 Async Event Request Limit: 4 00:11:18.817 Number of Firmware Slots: N/A 00:11:18.817 Firmware Slot 1 Read-Only: N/A 00:11:18.817 Firmware Activation Without Reset: N/A 00:11:18.817 Multiple Update Detection Support: N/A 00:11:18.817 Firmware Update Granularity: No Information Provided 00:11:18.817 Per-Namespace SMART Log: No 00:11:18.817 Asymmetric Namespace Access Log Page: Not Supported 00:11:18.817 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:18.817 Command Effects Log Page: Supported 00:11:18.817 Get Log Page Extended Data: Supported 00:11:18.817 Telemetry Log Pages: Not Supported 00:11:18.817 Persistent Event Log Pages: Not Supported 00:11:18.817 Supported Log Pages Log Page: May Support 00:11:18.817 Commands Supported & Effects Log Page: Not Supported 00:11:18.817 Feature Identifiers & Effects Log Page:May 
Support 00:11:18.817 NVMe-MI Commands & Effects Log Page: May Support 00:11:18.817 Data Area 4 for Telemetry Log: Not Supported 00:11:18.817 Error Log Page Entries Supported: 128 00:11:18.817 Keep Alive: Supported 00:11:18.817 Keep Alive Granularity: 10000 ms 00:11:18.817 00:11:18.817 NVM Command Set Attributes 00:11:18.817 ========================== 00:11:18.817 Submission Queue Entry Size 00:11:18.817 Max: 64 00:11:18.817 Min: 64 00:11:18.817 Completion Queue Entry Size 00:11:18.817 Max: 16 00:11:18.817 Min: 16 00:11:18.817 Number of Namespaces: 32 00:11:18.817 Compare Command: Supported 00:11:18.817 Write Uncorrectable Command: Not Supported 00:11:18.817 Dataset Management Command: Supported 00:11:18.817 Write Zeroes Command: Supported 00:11:18.817 Set Features Save Field: Not Supported 00:11:18.817 Reservations: Not Supported 00:11:18.817 Timestamp: Not Supported 00:11:18.817 Copy: Supported 00:11:18.817 Volatile Write Cache: Present 00:11:18.817 Atomic Write Unit (Normal): 1 00:11:18.817 Atomic Write Unit (PFail): 1 00:11:18.817 Atomic Compare & Write Unit: 1 00:11:18.817 Fused Compare & Write: Supported 00:11:18.817 Scatter-Gather List 00:11:18.817 SGL Command Set: Supported (Dword aligned) 00:11:18.817 SGL Keyed: Not Supported 00:11:18.817 SGL Bit Bucket Descriptor: Not Supported 00:11:18.817 SGL Metadata Pointer: Not Supported 00:11:18.817 Oversized SGL: Not Supported 00:11:18.817 SGL Metadata Address: Not Supported 00:11:18.817 SGL Offset: Not Supported 00:11:18.817 Transport SGL Data Block: Not Supported 00:11:18.817 Replay Protected Memory Block: Not Supported 00:11:18.817 00:11:18.817 Firmware Slot Information 00:11:18.817 ========================= 00:11:18.817 Active slot: 1 00:11:18.817 Slot 1 Firmware Revision: 24.05 00:11:18.817 00:11:18.817 00:11:18.817 Commands Supported and Effects 00:11:18.817 ============================== 00:11:18.817 Admin Commands 00:11:18.817 -------------- 00:11:18.817 Get Log Page (02h): Supported 00:11:18.817 Identify (06h): Supported 00:11:18.817 Abort (08h): Supported 00:11:18.817 Set Features (09h): Supported 00:11:18.817 Get Features (0Ah): Supported 00:11:18.817 Asynchronous Event Request (0Ch): Supported 00:11:18.817 Keep Alive (18h): Supported 00:11:18.817 I/O Commands 00:11:18.817 ------------ 00:11:18.817 Flush (00h): Supported LBA-Change 00:11:18.817 Write (01h): Supported LBA-Change 00:11:18.817 Read (02h): Supported 00:11:18.817 Compare (05h): Supported 00:11:18.817 Write Zeroes (08h): Supported LBA-Change 00:11:18.817 Dataset Management (09h): Supported LBA-Change 00:11:18.817 Copy (19h): Supported LBA-Change 00:11:18.817 Unknown (79h): Supported LBA-Change 00:11:18.817 Unknown (7Ah): Supported 00:11:18.817 00:11:18.817 Error Log 00:11:18.817 ========= 00:11:18.817 00:11:18.818 Arbitration 00:11:18.818 =========== 00:11:18.818 Arbitration Burst: 1 00:11:18.818 00:11:18.818 Power Management 00:11:18.818 ================ 00:11:18.818 Number of Power States: 1 00:11:18.818 Current Power State: Power State #0 00:11:18.818 Power State #0: 00:11:18.818 Max Power: 0.00 W 00:11:18.818 Non-Operational State: Operational 00:11:18.818 Entry Latency: Not Reported 00:11:18.818 Exit Latency: Not Reported 00:11:18.818 Relative Read Throughput: 0 00:11:18.818 Relative Read Latency: 0 00:11:18.818 Relative Write Throughput: 0 00:11:18.818 Relative Write Latency: 0 00:11:18.818 Idle Power: Not Reported 00:11:18.818 Active Power: Not Reported 00:11:18.818 Non-Operational Permissive Mode: Not Supported 00:11:18.818 00:11:18.818 Health Information 
00:11:18.818 ================== 00:11:18.818 Critical Warnings: 00:11:18.818 Available Spare Space: OK 00:11:18.818 Temperature: OK 00:11:18.818 Device Reliability: OK 00:11:18.818 Read Only: No 00:11:18.818 Volatile Memory Backup: OK 00:11:18.818 [2024-05-15 17:02:06.382285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:18.818 [2024-05-15 17:02:06.390171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:18.818 [2024-05-15 17:02:06.390197] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:18.818 [2024-05-15 17:02:06.390205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.818 [2024-05-15 17:02:06.390211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.818 [2024-05-15 17:02:06.390216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.818 [2024-05-15 17:02:06.390221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.818 [2024-05-15 17:02:06.390280] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:18.818 [2024-05-15 17:02:06.390291] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:18.818 [2024-05-15 17:02:06.391288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:18.818 [2024-05-15 17:02:06.391330] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:18.818 [2024-05-15 17:02:06.391336] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:18.818 [2024-05-15 17:02:06.392290] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:18.818 [2024-05-15 17:02:06.392302] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:18.818 [2024-05-15 17:02:06.392347] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:18.818 [2024-05-15 17:02:06.393333] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:18.818 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:18.818 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:18.818 Available Spare: 0% 00:11:18.818 Available Spare Threshold: 0% 00:11:18.818 Life Percentage Used: 0% 00:11:18.818 Data Units Read: 0 00:11:18.818 Data Units Written: 0 00:11:18.818 Host Read Commands: 0 00:11:18.818 Host Write Commands: 0 00:11:18.818 Controller Busy Time: 0 minutes 00:11:18.818 Power Cycles: 0 00:11:18.818 Power On Hours: 0 hours 00:11:18.818 Unsafe Shutdowns: 0 00:11:18.818 Unrecoverable Media Errors: 0 00:11:18.818 Lifetime Error Log Entries: 0 00:11:18.818 Warning Temperature Time: 0 
minutes 00:11:18.818 Critical Temperature Time: 0 minutes 00:11:18.818 00:11:18.818 Number of Queues 00:11:18.818 ================ 00:11:18.818 Number of I/O Submission Queues: 127 00:11:18.818 Number of I/O Completion Queues: 127 00:11:18.818 00:11:18.818 Active Namespaces 00:11:18.818 ================= 00:11:18.818 Namespace ID:1 00:11:18.818 Error Recovery Timeout: Unlimited 00:11:18.818 Command Set Identifier: NVM (00h) 00:11:18.818 Deallocate: Supported 00:11:18.818 Deallocated/Unwritten Error: Not Supported 00:11:18.818 Deallocated Read Value: Unknown 00:11:18.818 Deallocate in Write Zeroes: Not Supported 00:11:18.818 Deallocated Guard Field: 0xFFFF 00:11:18.818 Flush: Supported 00:11:18.818 Reservation: Supported 00:11:18.818 Namespace Sharing Capabilities: Multiple Controllers 00:11:18.818 Size (in LBAs): 131072 (0GiB) 00:11:18.818 Capacity (in LBAs): 131072 (0GiB) 00:11:18.818 Utilization (in LBAs): 131072 (0GiB) 00:11:18.818 NGUID: 6E8DC514A34F411B92E498AD50A24542 00:11:18.818 UUID: 6e8dc514-a34f-411b-92e4-98ad50a24542 00:11:18.818 Thin Provisioning: Not Supported 00:11:18.818 Per-NS Atomic Units: Yes 00:11:18.818 Atomic Boundary Size (Normal): 0 00:11:18.818 Atomic Boundary Size (PFail): 0 00:11:18.818 Atomic Boundary Offset: 0 00:11:18.818 Maximum Single Source Range Length: 65535 00:11:18.818 Maximum Copy Length: 65535 00:11:18.818 Maximum Source Range Count: 1 00:11:18.818 NGUID/EUI64 Never Reused: No 00:11:18.818 Namespace Write Protected: No 00:11:18.818 Number of LBA Formats: 1 00:11:18.818 Current LBA Format: LBA Format #00 00:11:18.818 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:18.818 00:11:18.818 17:02:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:18.818 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.073 [2024-05-15 17:02:06.606618] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:24.326 Initializing NVMe Controllers 00:11:24.326 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:24.326 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:24.326 Initialization complete. Launching workers. 
00:11:24.326 ======================================================== 00:11:24.326 Latency(us) 00:11:24.326 Device Information : IOPS MiB/s Average min max 00:11:24.326 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39925.26 155.96 3205.79 951.23 10415.28 00:11:24.326 ======================================================== 00:11:24.326 Total : 39925.26 155.96 3205.79 951.23 10415.28 00:11:24.326 00:11:24.326 [2024-05-15 17:02:11.724427] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:24.326 17:02:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:24.326 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.326 [2024-05-15 17:02:11.943074] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:29.582 Initializing NVMe Controllers 00:11:29.582 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:29.582 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:29.582 Initialization complete. Launching workers. 00:11:29.582 ======================================================== 00:11:29.582 Latency(us) 00:11:29.582 Device Information : IOPS MiB/s Average min max 00:11:29.582 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39859.48 155.70 3210.88 976.87 10588.17 00:11:29.582 ======================================================== 00:11:29.582 Total : 39859.48 155.70 3210.88 976.87 10588.17 00:11:29.582 00:11:29.582 [2024-05-15 17:02:16.962637] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:29.582 17:02:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:29.582 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.582 [2024-05-15 17:02:17.149796] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:34.839 [2024-05-15 17:02:22.295271] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:34.839 Initializing NVMe Controllers 00:11:34.839 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:34.839 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:34.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:34.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:11:34.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:34.839 Initialization complete. Launching workers. 
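The two spdk_nvme_perf runs above (4 KiB I/O at queue depth 128 on a single core against the vfio-user controller, reads then writes) produce summary rows that can be sanity-checked by hand before the reconnect output continues below: the MiB/s column is IOPS times the 4 KiB transfer size, and the average latency follows Little's law (queue depth divided by IOPS). A quick illustrative check with bc, not part of the test harness:

    # read run: 39925.26 IOPS at 4 KiB per I/O -> bandwidth column
    echo '39925.26 * 4096 / 1048576' | bc -l   # ~155.96 MiB/s, as reported
    # average latency = queue depth / IOPS, converted to microseconds
    echo '128 / 39925.26 * 1000000' | bc -l    # ~3206 us, consistent with the 3205.79 us reported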
00:11:34.839 Starting thread on core 2 00:11:34.839 Starting thread on core 3 00:11:34.839 Starting thread on core 1 00:11:34.839 17:02:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:34.839 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.096 [2024-05-15 17:02:22.581646] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:38.373 [2024-05-15 17:02:25.652343] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:38.373 Initializing NVMe Controllers 00:11:38.373 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:38.373 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:38.373 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:38.373 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:38.373 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:38.373 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:38.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:38.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:38.373 Initialization complete. Launching workers. 00:11:38.373 Starting thread on core 1 with urgent priority queue 00:11:38.373 Starting thread on core 2 with urgent priority queue 00:11:38.373 Starting thread on core 3 with urgent priority queue 00:11:38.373 Starting thread on core 0 with urgent priority queue 00:11:38.373 SPDK bdev Controller (SPDK2 ) core 0: 7831.00 IO/s 12.77 secs/100000 ios 00:11:38.373 SPDK bdev Controller (SPDK2 ) core 1: 8982.67 IO/s 11.13 secs/100000 ios 00:11:38.373 SPDK bdev Controller (SPDK2 ) core 2: 7730.00 IO/s 12.94 secs/100000 ios 00:11:38.373 SPDK bdev Controller (SPDK2 ) core 3: 9248.33 IO/s 10.81 secs/100000 ios 00:11:38.373 ======================================================== 00:11:38.373 00:11:38.373 17:02:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:38.373 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.373 [2024-05-15 17:02:25.925664] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:38.373 Initializing NVMe Controllers 00:11:38.373 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:38.373 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:38.373 Namespace ID: 1 size: 0GB 00:11:38.373 Initialization complete. 00:11:38.373 INFO: using host memory buffer for IO 00:11:38.373 Hello world! 
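In the arbitration summary above, the secs/100000 ios column is simply 100000 divided by the per-core IO/s figure, so the two columns can be cross-checked the same way (again an illustrative calculation, not part of the harness):

    # core 0 in the arbitration report: 7831.00 IO/s
    echo '100000 / 7831.00' | bc -l   # ~12.77 secs/100000 ios, matching the report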
00:11:38.373 [2024-05-15 17:02:25.935724] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:38.373 17:02:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:38.373 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.630 [2024-05-15 17:02:26.194047] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:40.001 Initializing NVMe Controllers 00:11:40.001 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:40.001 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:40.001 Initialization complete. Launching workers. 00:11:40.001 submit (in ns) avg, min, max = 6989.2, 3303.5, 4002031.3 00:11:40.001 complete (in ns) avg, min, max = 19339.3, 1800.0, 4035473.9 00:11:40.001 00:11:40.001 Submit histogram 00:11:40.001 ================ 00:11:40.001 Range in us Cumulative Count 00:11:40.001 3.297 - 3.311: 0.0061% ( 1) 00:11:40.001 3.311 - 3.325: 0.0246% ( 3) 00:11:40.001 3.325 - 3.339: 0.0676% ( 7) 00:11:40.001 3.339 - 3.353: 0.1844% ( 19) 00:11:40.001 3.353 - 3.367: 0.3566% ( 28) 00:11:40.001 3.367 - 3.381: 0.9345% ( 94) 00:11:40.001 3.381 - 3.395: 2.7604% ( 297) 00:11:40.001 3.395 - 3.409: 7.0146% ( 692) 00:11:40.001 3.409 - 3.423: 12.2956% ( 859) 00:11:40.001 3.423 - 3.437: 18.4926% ( 1008) 00:11:40.001 3.437 - 3.450: 24.9416% ( 1049) 00:11:40.001 3.450 - 3.464: 30.2779% ( 868) 00:11:40.001 3.464 - 3.478: 35.0547% ( 777) 00:11:40.001 3.478 - 3.492: 40.4586% ( 879) 00:11:40.001 3.492 - 3.506: 45.7273% ( 857) 00:11:40.001 3.506 - 3.520: 49.6188% ( 633) 00:11:40.001 3.520 - 3.534: 53.1907% ( 581) 00:11:40.001 3.534 - 3.548: 58.8959% ( 928) 00:11:40.001 3.548 - 3.562: 66.1318% ( 1177) 00:11:40.001 3.562 - 3.590: 74.7326% ( 1399) 00:11:40.001 3.590 - 3.617: 83.3702% ( 1405) 00:11:40.001 3.617 - 3.645: 86.4872% ( 507) 00:11:40.001 3.645 - 3.673: 87.3417% ( 139) 00:11:40.001 3.673 - 3.701: 88.3991% ( 172) 00:11:40.001 3.701 - 3.729: 90.2988% ( 309) 00:11:40.001 3.729 - 3.757: 92.0386% ( 283) 00:11:40.001 3.757 - 3.784: 93.6370% ( 260) 00:11:40.001 3.784 - 3.812: 95.4629% ( 297) 00:11:40.001 3.812 - 3.840: 97.1536% ( 275) 00:11:40.001 3.840 - 3.868: 98.1618% ( 164) 00:11:40.001 3.868 - 3.896: 98.8196% ( 107) 00:11:40.001 3.896 - 3.923: 99.2623% ( 72) 00:11:40.001 3.923 - 3.951: 99.4713% ( 34) 00:11:40.001 3.951 - 3.979: 99.5512% ( 13) 00:11:40.001 3.979 - 4.007: 99.5758% ( 4) 00:11:40.001 4.035 - 4.063: 99.5881% ( 2) 00:11:40.001 4.063 - 4.090: 99.5942% ( 1) 00:11:40.001 5.064 - 5.092: 99.6004% ( 1) 00:11:40.001 5.287 - 5.315: 99.6065% ( 1) 00:11:40.001 5.370 - 5.398: 99.6127% ( 1) 00:11:40.001 5.426 - 5.454: 99.6188% ( 1) 00:11:40.001 5.510 - 5.537: 99.6250% ( 1) 00:11:40.001 5.537 - 5.565: 99.6311% ( 1) 00:11:40.001 5.593 - 5.621: 99.6434% ( 2) 00:11:40.001 5.899 - 5.927: 99.6496% ( 1) 00:11:40.001 6.094 - 6.122: 99.6557% ( 1) 00:11:40.001 6.344 - 6.372: 99.6619% ( 1) 00:11:40.001 6.372 - 6.400: 99.6680% ( 1) 00:11:40.001 6.400 - 6.428: 99.6742% ( 1) 00:11:40.001 6.428 - 6.456: 99.6803% ( 1) 00:11:40.001 6.650 - 6.678: 99.6865% ( 1) 00:11:40.001 6.678 - 6.706: 99.6926% ( 1) 00:11:40.001 6.762 - 6.790: 99.6988% ( 1) 00:11:40.001 6.845 - 6.873: 99.7049% ( 1) 00:11:40.001 6.873 - 6.901: 99.7111% ( 1) 00:11:40.001 6.901 - 6.929: 99.7172% ( 1) 00:11:40.001 6.929 - 
6.957: 99.7295% ( 2) 00:11:40.001 6.957 - 6.984: 99.7356% ( 1) 00:11:40.001 7.040 - 7.068: 99.7418% ( 1) 00:11:40.001 7.068 - 7.096: 99.7541% ( 2) 00:11:40.001 7.179 - 7.235: 99.7664% ( 2) 00:11:40.001 7.290 - 7.346: 99.7787% ( 2) 00:11:40.001 7.346 - 7.402: 99.8033% ( 4) 00:11:40.001 7.402 - 7.457: 99.8094% ( 1) 00:11:40.001 7.457 - 7.513: 99.8156% ( 1) 00:11:40.001 7.680 - 7.736: 99.8340% ( 3) 00:11:40.001 7.903 - 7.958: 99.8402% ( 1) 00:11:40.001 7.958 - 8.014: 99.8463% ( 1) 00:11:40.001 8.070 - 8.125: 99.8525% ( 1) 00:11:40.001 8.181 - 8.237: 99.8647% ( 2) 00:11:40.001 8.237 - 8.292: 99.8709% ( 1) 00:11:40.001 8.292 - 8.348: 99.8770% ( 1) 00:11:40.001 8.515 - 8.570: 99.8832% ( 1) 00:11:40.001 8.904 - 8.960: 99.8955% ( 2) 00:11:40.001 9.183 - 9.238: 99.9078% ( 2) 00:11:40.001 10.574 - 10.630: 99.9139% ( 1) 00:11:40.001 3989.148 - 4017.642: 100.0000% ( 14) 00:11:40.001 00:11:40.001 Complete histogram 00:11:40.001 ================== 00:11:40.001 Range in us Cumulative Count 00:11:40.001 1.795 - [2024-05-15 17:02:27.286191] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:40.001 1.809: 0.0061% ( 1) 00:11:40.001 1.809 - 1.823: 0.0307% ( 4) 00:11:40.001 1.823 - 1.837: 0.8545% ( 134) 00:11:40.001 1.837 - 1.850: 2.9018% ( 333) 00:11:40.001 1.850 - 1.864: 4.3158% ( 230) 00:11:40.001 1.864 - 1.878: 12.4431% ( 1322) 00:11:40.001 1.878 - 1.892: 62.1972% ( 8093) 00:11:40.001 1.892 - 1.906: 87.2433% ( 4074) 00:11:40.001 1.906 - 1.920: 92.9977% ( 936) 00:11:40.001 1.920 - 1.934: 96.2867% ( 535) 00:11:40.001 1.934 - 1.948: 97.5839% ( 211) 00:11:40.001 1.948 - 1.962: 98.4016% ( 133) 00:11:40.001 1.962 - 1.976: 99.0286% ( 102) 00:11:40.001 1.976 - 1.990: 99.2192% ( 31) 00:11:40.001 1.990 - 2.003: 99.2623% ( 7) 00:11:40.001 2.003 - 2.017: 99.2930% ( 5) 00:11:40.001 2.017 - 2.031: 99.3360% ( 7) 00:11:40.001 2.045 - 2.059: 99.3483% ( 2) 00:11:40.001 2.059 - 2.073: 99.3545% ( 1) 00:11:40.001 2.073 - 2.087: 99.3791% ( 4) 00:11:40.001 2.115 - 2.129: 99.3852% ( 1) 00:11:40.001 2.143 - 2.157: 99.3914% ( 1) 00:11:40.001 2.212 - 2.226: 99.4037% ( 2) 00:11:40.001 3.868 - 3.896: 99.4098% ( 1) 00:11:40.001 4.007 - 4.035: 99.4160% ( 1) 00:11:40.001 4.842 - 4.870: 99.4221% ( 1) 00:11:40.001 5.009 - 5.037: 99.4283% ( 1) 00:11:40.001 5.343 - 5.370: 99.4344% ( 1) 00:11:40.001 5.398 - 5.426: 99.4406% ( 1) 00:11:40.001 5.426 - 5.454: 99.4467% ( 1) 00:11:40.001 5.565 - 5.593: 99.4528% ( 1) 00:11:40.001 5.593 - 5.621: 99.4590% ( 1) 00:11:40.001 5.677 - 5.704: 99.4651% ( 1) 00:11:40.001 5.732 - 5.760: 99.4774% ( 2) 00:11:40.001 5.788 - 5.816: 99.4836% ( 1) 00:11:40.001 5.927 - 5.955: 99.4959% ( 2) 00:11:40.001 5.983 - 6.010: 99.5020% ( 1) 00:11:40.001 6.038 - 6.066: 99.5082% ( 1) 00:11:40.001 6.066 - 6.094: 99.5143% ( 1) 00:11:40.001 6.233 - 6.261: 99.5205% ( 1) 00:11:40.001 6.317 - 6.344: 99.5266% ( 1) 00:11:40.001 6.344 - 6.372: 99.5328% ( 1) 00:11:40.001 6.372 - 6.400: 99.5389% ( 1) 00:11:40.001 6.456 - 6.483: 99.5451% ( 1) 00:11:40.001 7.012 - 7.040: 99.5512% ( 1) 00:11:40.001 7.068 - 7.096: 99.5574% ( 1) 00:11:40.001 39.179 - 39.402: 99.5635% ( 1) 00:11:40.001 3989.148 - 4017.642: 99.9939% ( 70) 00:11:40.001 4017.642 - 4046.136: 100.0000% ( 1) 00:11:40.001 00:11:40.002 17:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:40.002 17:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:40.002 
17:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:40.002 17:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:40.002 17:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:40.002 [ 00:11:40.002 { 00:11:40.002 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:40.002 "subtype": "Discovery", 00:11:40.002 "listen_addresses": [], 00:11:40.002 "allow_any_host": true, 00:11:40.002 "hosts": [] 00:11:40.002 }, 00:11:40.002 { 00:11:40.002 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:40.002 "subtype": "NVMe", 00:11:40.002 "listen_addresses": [ 00:11:40.002 { 00:11:40.002 "trtype": "VFIOUSER", 00:11:40.002 "adrfam": "IPv4", 00:11:40.002 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:40.002 "trsvcid": "0" 00:11:40.002 } 00:11:40.002 ], 00:11:40.002 "allow_any_host": true, 00:11:40.002 "hosts": [], 00:11:40.002 "serial_number": "SPDK1", 00:11:40.002 "model_number": "SPDK bdev Controller", 00:11:40.002 "max_namespaces": 32, 00:11:40.002 "min_cntlid": 1, 00:11:40.002 "max_cntlid": 65519, 00:11:40.002 "namespaces": [ 00:11:40.002 { 00:11:40.002 "nsid": 1, 00:11:40.002 "bdev_name": "Malloc1", 00:11:40.002 "name": "Malloc1", 00:11:40.002 "nguid": "29A55CED10BB46188E0F806A23F28629", 00:11:40.002 "uuid": "29a55ced-10bb-4618-8e0f-806a23f28629" 00:11:40.002 }, 00:11:40.002 { 00:11:40.002 "nsid": 2, 00:11:40.002 "bdev_name": "Malloc3", 00:11:40.002 "name": "Malloc3", 00:11:40.002 "nguid": "5450154B536E449D9A0C71CA5E73F407", 00:11:40.002 "uuid": "5450154b-536e-449d-9a0c-71ca5e73f407" 00:11:40.002 } 00:11:40.002 ] 00:11:40.002 }, 00:11:40.002 { 00:11:40.002 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:40.002 "subtype": "NVMe", 00:11:40.002 "listen_addresses": [ 00:11:40.002 { 00:11:40.002 "trtype": "VFIOUSER", 00:11:40.002 "adrfam": "IPv4", 00:11:40.002 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:40.002 "trsvcid": "0" 00:11:40.002 } 00:11:40.002 ], 00:11:40.002 "allow_any_host": true, 00:11:40.002 "hosts": [], 00:11:40.002 "serial_number": "SPDK2", 00:11:40.002 "model_number": "SPDK bdev Controller", 00:11:40.002 "max_namespaces": 32, 00:11:40.002 "min_cntlid": 1, 00:11:40.002 "max_cntlid": 65519, 00:11:40.002 "namespaces": [ 00:11:40.002 { 00:11:40.002 "nsid": 1, 00:11:40.002 "bdev_name": "Malloc2", 00:11:40.002 "name": "Malloc2", 00:11:40.002 "nguid": "6E8DC514A34F411B92E498AD50A24542", 00:11:40.002 "uuid": "6e8dc514-a34f-411b-92e4-98ad50a24542" 00:11:40.002 } 00:11:40.002 ] 00:11:40.002 } 00:11:40.002 ] 00:11:40.002 17:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:40.002 17:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:11:40.002 17:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2993662 00:11:40.002 17:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:40.002 17:02:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:11:40.002 17:02:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:40.002 17:02:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:40.002 17:02:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:11:40.002 17:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:40.002 17:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:11:40.002 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.002 [2024-05-15 17:02:27.641582] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:40.259 Malloc4 00:11:40.259 17:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:11:40.259 [2024-05-15 17:02:27.899527] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:40.516 17:02:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:40.516 Asynchronous Event Request test 00:11:40.516 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:40.516 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:40.516 Registering asynchronous event callbacks... 00:11:40.516 Starting namespace attribute notice tests for all controllers... 00:11:40.516 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:40.516 aer_cb - Changed Namespace 00:11:40.516 Cleaning up... 
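The sequence above exercises the asynchronous-event path: the aer test binary attaches to the vfio-user controller and registers AER callbacks, a second malloc bdev (Malloc4) is hot-added to nqn.2019-07.io.spdk:cnode2 as namespace 2 over RPC, and the controller raises an AER for log page 4 (the changed namespace list), which the callback reports as "aer_cb - Changed Namespace". The subsystem dump that follows shows Malloc4 attached as nsid 2. The hot-add itself is just the RPCs already used in this run; condensed (rpc.py path as in this workspace), roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 --name Malloc4
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    $rpc nvmf_get_subsystems   # Malloc4 now listed under cnode2 as nsid 2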
00:11:40.516 [ 00:11:40.516 { 00:11:40.516 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:40.516 "subtype": "Discovery", 00:11:40.516 "listen_addresses": [], 00:11:40.516 "allow_any_host": true, 00:11:40.516 "hosts": [] 00:11:40.516 }, 00:11:40.516 { 00:11:40.516 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:40.516 "subtype": "NVMe", 00:11:40.516 "listen_addresses": [ 00:11:40.516 { 00:11:40.516 "trtype": "VFIOUSER", 00:11:40.516 "adrfam": "IPv4", 00:11:40.516 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:40.516 "trsvcid": "0" 00:11:40.516 } 00:11:40.516 ], 00:11:40.516 "allow_any_host": true, 00:11:40.516 "hosts": [], 00:11:40.516 "serial_number": "SPDK1", 00:11:40.516 "model_number": "SPDK bdev Controller", 00:11:40.516 "max_namespaces": 32, 00:11:40.516 "min_cntlid": 1, 00:11:40.516 "max_cntlid": 65519, 00:11:40.516 "namespaces": [ 00:11:40.516 { 00:11:40.516 "nsid": 1, 00:11:40.516 "bdev_name": "Malloc1", 00:11:40.516 "name": "Malloc1", 00:11:40.516 "nguid": "29A55CED10BB46188E0F806A23F28629", 00:11:40.516 "uuid": "29a55ced-10bb-4618-8e0f-806a23f28629" 00:11:40.516 }, 00:11:40.516 { 00:11:40.516 "nsid": 2, 00:11:40.516 "bdev_name": "Malloc3", 00:11:40.516 "name": "Malloc3", 00:11:40.516 "nguid": "5450154B536E449D9A0C71CA5E73F407", 00:11:40.516 "uuid": "5450154b-536e-449d-9a0c-71ca5e73f407" 00:11:40.516 } 00:11:40.516 ] 00:11:40.516 }, 00:11:40.516 { 00:11:40.516 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:40.516 "subtype": "NVMe", 00:11:40.516 "listen_addresses": [ 00:11:40.516 { 00:11:40.516 "trtype": "VFIOUSER", 00:11:40.516 "adrfam": "IPv4", 00:11:40.516 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:40.516 "trsvcid": "0" 00:11:40.516 } 00:11:40.516 ], 00:11:40.516 "allow_any_host": true, 00:11:40.516 "hosts": [], 00:11:40.516 "serial_number": "SPDK2", 00:11:40.516 "model_number": "SPDK bdev Controller", 00:11:40.516 "max_namespaces": 32, 00:11:40.516 "min_cntlid": 1, 00:11:40.516 "max_cntlid": 65519, 00:11:40.516 "namespaces": [ 00:11:40.516 { 00:11:40.516 "nsid": 1, 00:11:40.516 "bdev_name": "Malloc2", 00:11:40.516 "name": "Malloc2", 00:11:40.516 "nguid": "6E8DC514A34F411B92E498AD50A24542", 00:11:40.516 "uuid": "6e8dc514-a34f-411b-92e4-98ad50a24542" 00:11:40.516 }, 00:11:40.516 { 00:11:40.516 "nsid": 2, 00:11:40.516 "bdev_name": "Malloc4", 00:11:40.516 "name": "Malloc4", 00:11:40.516 "nguid": "321BA81B4DC44E71B18ADF43BAF7041D", 00:11:40.516 "uuid": "321ba81b-4dc4-4e71-b18a-df43baf7041d" 00:11:40.516 } 00:11:40.516 ] 00:11:40.516 } 00:11:40.516 ] 00:11:40.516 17:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2993662 00:11:40.516 17:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:11:40.516 17:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2986035 00:11:40.516 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 2986035 ']' 00:11:40.516 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 2986035 00:11:40.516 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:11:40.516 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:40.516 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2986035 00:11:40.516 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:40.516 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:11:40.516 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2986035' 00:11:40.516 killing process with pid 2986035 00:11:40.516 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 2986035 00:11:40.516 [2024-05-15 17:02:28.153112] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:40.517 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 2986035 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2993895 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2993895' 00:11:41.081 Process pid: 2993895 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2993895 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 2993895 ']' 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:41.081 17:02:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:41.081 [2024-05-15 17:02:28.487882] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:11:41.081 [2024-05-15 17:02:28.488814] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:11:41.081 [2024-05-15 17:02:28.488854] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.081 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.081 [2024-05-15 17:02:28.543140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.081 [2024-05-15 17:02:28.619561] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:41.081 [2024-05-15 17:02:28.619599] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.081 [2024-05-15 17:02:28.619606] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.081 [2024-05-15 17:02:28.619612] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.081 [2024-05-15 17:02:28.619617] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.081 [2024-05-15 17:02:28.619662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.081 [2024-05-15 17:02:28.619759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.081 [2024-05-15 17:02:28.619842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.081 [2024-05-15 17:02:28.619843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.081 [2024-05-15 17:02:28.698241] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:11:41.081 [2024-05-15 17:02:28.698389] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:11:41.081 [2024-05-15 17:02:28.698541] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:11:41.081 [2024-05-15 17:02:28.698859] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:11:41.081 [2024-05-15 17:02:28.699041] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
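At this point the target has been relaunched on cores 0-3 with --interrupt-mode, and the notices above show each nvmf_tgt poll group thread (and the app thread) being switched to interrupt mode. The vfio-user devices are then rebuilt with the same RPC sequence the transcript replays below; condensed for the first device (rpc.py path as in this workspace), the flow is roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $rpc bdev_malloc_create 64 512 -b Malloc1   # 64 MiB of 512-byte blocks, i.e. the 131072-LBA namespace seen in the identify data earlier
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0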
00:11:41.645 17:02:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:41.645 17:02:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:11:41.645 17:02:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:43.031 17:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:11:43.031 17:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:43.031 17:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:43.031 17:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:43.031 17:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:43.031 17:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:43.031 Malloc1 00:11:43.031 17:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:43.305 17:02:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:43.563 17:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:43.563 [2024-05-15 17:02:31.180220] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:43.563 17:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:43.563 17:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:43.563 17:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:43.820 Malloc2 00:11:43.820 17:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:44.077 17:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:44.335 17:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:44.335 17:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:11:44.335 17:02:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2993895 00:11:44.335 17:02:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 2993895 ']' 00:11:44.335 17:02:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 2993895 
00:11:44.335 17:02:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:11:44.335 17:02:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:44.335 17:02:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2993895 00:11:44.335 17:02:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:44.335 17:02:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:44.335 17:02:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2993895' 00:11:44.335 killing process with pid 2993895 00:11:44.335 17:02:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 2993895 00:11:44.335 [2024-05-15 17:02:31.978397] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:44.335 17:02:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 2993895 00:11:44.594 17:02:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:44.594 17:02:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:44.594 00:11:44.594 real 0m51.280s 00:11:44.594 user 3m22.858s 00:11:44.594 sys 0m3.609s 00:11:44.594 17:02:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:44.594 17:02:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:44.594 ************************************ 00:11:44.594 END TEST nvmf_vfio_user 00:11:44.594 ************************************ 00:11:44.854 17:02:32 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:44.854 17:02:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:44.854 17:02:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:44.854 17:02:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:44.854 ************************************ 00:11:44.854 START TEST nvmf_vfio_user_nvme_compliance 00:11:44.854 ************************************ 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:44.854 * Looking for test storage... 
00:11:44.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.854 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2994660 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2994660' 00:11:44.855 Process pid: 2994660 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2994660 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 2994660 ']' 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:44.855 17:02:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:44.855 [2024-05-15 17:02:32.457321] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:11:44.855 [2024-05-15 17:02:32.457367] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.855 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.855 [2024-05-15 17:02:32.512033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:45.113 [2024-05-15 17:02:32.590930] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:45.113 [2024-05-15 17:02:32.590965] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:45.113 [2024-05-15 17:02:32.590972] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:45.113 [2024-05-15 17:02:32.590978] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:45.113 [2024-05-15 17:02:32.590984] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:45.113 [2024-05-15 17:02:32.591026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.113 [2024-05-15 17:02:32.591043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.113 [2024-05-15 17:02:32.591045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.693 17:02:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:45.693 17:02:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:11:45.693 17:02:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:11:46.625 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:46.625 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:46.625 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:46.625 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.625 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:46.883 malloc0 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:46.883 [2024-05-15 17:02:34.358951] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.883 17:02:34 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:46.883 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.883 00:11:46.883 00:11:46.883 CUnit - A unit testing framework for C - Version 2.1-3 00:11:46.883 http://cunit.sourceforge.net/ 00:11:46.883 00:11:46.883 00:11:46.883 Suite: nvme_compliance 00:11:46.883 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 17:02:34.505672] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:46.883 [2024-05-15 17:02:34.507016] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:46.883 [2024-05-15 17:02:34.507031] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:46.883 [2024-05-15 17:02:34.507037] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:46.883 [2024-05-15 17:02:34.508693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:46.883 passed 00:11:47.140 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 17:02:34.586248] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:47.140 [2024-05-15 17:02:34.589263] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:47.140 passed 00:11:47.140 Test: admin_identify_ns ...[2024-05-15 17:02:34.667529] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:47.140 [2024-05-15 17:02:34.731181] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:47.140 [2024-05-15 17:02:34.739176] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:47.140 [2024-05-15 17:02:34.760266] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:47.140 passed 00:11:47.398 Test: admin_get_features_mandatory_features ...[2024-05-15 17:02:34.835262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:47.398 [2024-05-15 17:02:34.838279] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:47.398 passed 00:11:47.398 Test: admin_get_features_optional_features ...[2024-05-15 17:02:34.916800] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:47.398 [2024-05-15 17:02:34.919821] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:47.398 passed 00:11:47.398 Test: admin_set_features_number_of_queues ...[2024-05-15 17:02:34.996524] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:47.655 [2024-05-15 17:02:35.101247] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:47.655 passed 00:11:47.655 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 17:02:35.178023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:47.655 [2024-05-15 17:02:35.181056] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:47.655 passed 
00:11:47.655 Test: admin_get_log_page_with_lpo ...[2024-05-15 17:02:35.258984] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:47.913 [2024-05-15 17:02:35.326173] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:11:47.913 [2024-05-15 17:02:35.339250] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:47.913 passed 00:11:47.913 Test: fabric_property_get ...[2024-05-15 17:02:35.415228] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:47.913 [2024-05-15 17:02:35.416451] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:11:47.913 [2024-05-15 17:02:35.418246] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:47.913 passed 00:11:47.913 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 17:02:35.495749] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:47.913 [2024-05-15 17:02:35.496972] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:47.913 [2024-05-15 17:02:35.498773] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:47.913 passed 00:11:48.170 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 17:02:35.576551] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:48.170 [2024-05-15 17:02:35.660179] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:48.170 [2024-05-15 17:02:35.676168] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:48.170 [2024-05-15 17:02:35.681251] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:48.170 passed 00:11:48.170 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 17:02:35.757992] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:48.170 [2024-05-15 17:02:35.759224] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:48.170 [2024-05-15 17:02:35.761020] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:48.170 passed 00:11:48.428 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 17:02:35.837819] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:48.428 [2024-05-15 17:02:35.913178] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:48.428 [2024-05-15 17:02:35.937176] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:48.428 [2024-05-15 17:02:35.942254] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:48.428 passed 00:11:48.428 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 17:02:36.019226] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:48.428 [2024-05-15 17:02:36.020452] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:48.428 [2024-05-15 17:02:36.020475] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:48.428 [2024-05-15 17:02:36.024269] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:48.428 passed 00:11:48.685 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
17:02:36.099585] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:48.685 [2024-05-15 17:02:36.191173] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:11:48.685 [2024-05-15 17:02:36.199170] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:48.685 [2024-05-15 17:02:36.207182] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:48.685 [2024-05-15 17:02:36.215175] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:48.685 [2024-05-15 17:02:36.244248] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:48.685 passed 00:11:48.685 Test: admin_create_io_sq_verify_pc ...[2024-05-15 17:02:36.321188] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:48.685 [2024-05-15 17:02:36.341180] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:48.943 [2024-05-15 17:02:36.358419] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:48.943 passed 00:11:48.943 Test: admin_create_io_qp_max_qps ...[2024-05-15 17:02:36.433945] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:50.315 [2024-05-15 17:02:37.535177] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:50.315 [2024-05-15 17:02:37.915599] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:50.315 passed 00:11:50.573 Test: admin_create_io_sq_shared_cq ...[2024-05-15 17:02:37.993799] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:50.573 [2024-05-15 17:02:38.126179] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:50.573 [2024-05-15 17:02:38.163233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:50.573 passed 00:11:50.573 00:11:50.573 Run Summary: Type Total Ran Passed Failed Inactive 00:11:50.573 suites 1 1 n/a 0 0 00:11:50.573 tests 18 18 18 0 0 00:11:50.573 asserts 360 360 360 0 n/a 00:11:50.573 00:11:50.573 Elapsed time = 1.505 seconds 00:11:50.573 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2994660 00:11:50.573 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 2994660 ']' 00:11:50.573 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 2994660 00:11:50.573 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:11:50.573 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:50.573 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2994660 00:11:50.831 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:50.831 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:50.831 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2994660' 00:11:50.831 killing process with pid 2994660 00:11:50.831 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 2994660 00:11:50.831 [2024-05-15 17:02:38.250806] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:50.831 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 2994660 00:11:50.831 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:11:50.831 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:50.831 00:11:50.831 real 0m6.174s 00:11:50.831 user 0m17.593s 00:11:50.831 sys 0m0.473s 00:11:50.831 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:50.831 17:02:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:50.831 ************************************ 00:11:50.831 END TEST nvmf_vfio_user_nvme_compliance 00:11:50.831 ************************************ 00:11:51.090 17:02:38 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:51.090 17:02:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:51.090 17:02:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:51.090 17:02:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:51.090 ************************************ 00:11:51.090 START TEST nvmf_vfio_user_fuzz 00:11:51.090 ************************************ 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:11:51.090 * Looking for test storage... 
00:11:51.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:11:51.090 17:02:38 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2995649 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2995649' 00:11:51.090 Process pid: 2995649 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2995649 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 2995649 ']' 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:51.090 17:02:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:52.023 17:02:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:52.023 17:02:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:11:52.023 17:02:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:52.955 malloc0 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:11:52.955 17:02:40 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:25.006 Fuzzing completed. Shutting down the fuzz application 00:12:25.006 00:12:25.006 Dumping successful admin opcodes: 00:12:25.006 8, 9, 10, 24, 00:12:25.006 Dumping successful io opcodes: 00:12:25.006 0, 00:12:25.006 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1095914, total successful commands: 4319, random_seed: 3090110656 00:12:25.006 NS: 0x200003a1ef00 admin qp, Total commands completed: 270427, total successful commands: 2178, random_seed: 1897725760 00:12:25.006 17:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:25.006 17:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.006 17:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:25.006 17:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.006 17:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2995649 00:12:25.006 17:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 2995649 ']' 00:12:25.006 17:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 2995649 00:12:25.006 17:03:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:12:25.006 17:03:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:25.006 17:03:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2995649 00:12:25.006 17:03:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:25.006 17:03:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:25.006 17:03:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2995649' 00:12:25.006 killing process with pid 2995649 00:12:25.006 17:03:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 2995649 00:12:25.006 17:03:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 2995649 00:12:25.006 17:03:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
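For reference, the vfio_user_fuzz bring-up traced above reduces to a short RPC sequence. The sketch below uses scripts/rpc.py directly rather than the autotest rpc_cmd wrapper, and assumes nvmf_tgt is already listening on the default /var/tmp/spdk.sock; the commands, bdev size, NQN, socket path and fuzzer flags are all taken from the trace.

# Minimal sketch of the vfio-user fuzz target setup mirrored from the rpc_cmd calls above.
mkdir -p /var/run/vfio-user
./scripts/rpc.py nvmf_create_transport -t VFIOUSER
./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MiB malloc bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# Run the NVMe command fuzzer against that listener for 30 seconds with a fixed seed,
# exactly as in the traced command line.
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a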
00:12:25.006 17:03:11 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:25.006 00:12:25.006 real 0m32.806s 00:12:25.006 user 0m35.394s 00:12:25.006 sys 0m25.225s 00:12:25.006 17:03:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:25.006 17:03:11 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:25.006 ************************************ 00:12:25.006 END TEST nvmf_vfio_user_fuzz 00:12:25.006 ************************************ 00:12:25.006 17:03:11 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:25.006 17:03:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:25.006 17:03:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:25.006 17:03:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:25.006 ************************************ 00:12:25.006 START TEST nvmf_host_management 00:12:25.006 ************************************ 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:25.006 * Looking for test storage... 00:12:25.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:25.006 17:03:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:29.190 17:03:16 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:29.190 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:29.190 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.190 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:29.191 Found net devices under 0000:86:00.0: cvl_0_0 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:29.191 Found net devices under 0000:86:00.1: cvl_0_1 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:29.191 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:29.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:29.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:12:29.521 00:12:29.521 --- 10.0.0.2 ping statistics --- 00:12:29.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.521 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:29.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:29.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:12:29.521 00:12:29.521 --- 10.0.0.1 ping statistics --- 00:12:29.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:29.521 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3004683 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3004683 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3004683 ']' 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:29.521 17:03:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:29.521 [2024-05-15 17:03:17.013707] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
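For reference, the nvmftestinit sequence traced above moves one of the two detected E810 ports (cvl_0_0) into a network namespace so the target and initiator can talk over a physical loopback. A condensed sketch of those commands follows; the interface names are the ones detected on this particular host, and the commands must run as root.

# Condensed sketch of the network setup performed by nvmftestinit above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (namespace side)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # host -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> host reachability check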
00:12:29.521 [2024-05-15 17:03:17.013746] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.521 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.521 [2024-05-15 17:03:17.070593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.780 [2024-05-15 17:03:17.150913] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.780 [2024-05-15 17:03:17.150950] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.780 [2024-05-15 17:03:17.150957] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.780 [2024-05-15 17:03:17.150962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.780 [2024-05-15 17:03:17.150967] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.780 [2024-05-15 17:03:17.151087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.780 [2024-05-15 17:03:17.151185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.780 [2024-05-15 17:03:17.151298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.780 [2024-05-15 17:03:17.151299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:30.346 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:30.346 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:12:30.346 17:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:30.346 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:30.346 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:30.346 17:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.346 17:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.346 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.346 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:30.346 [2024-05-15 17:03:17.871175] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.346 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.346 17:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:30.346 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:30.346 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.347 17:03:17 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:30.347 Malloc0 00:12:30.347 [2024-05-15 17:03:17.930880] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:30.347 [2024-05-15 17:03:17.931141] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3004950 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3004950 /var/tmp/bdevperf.sock 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3004950 ']' 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:30.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:30.347 { 00:12:30.347 "params": { 00:12:30.347 "name": "Nvme$subsystem", 00:12:30.347 "trtype": "$TEST_TRANSPORT", 00:12:30.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:30.347 "adrfam": "ipv4", 00:12:30.347 "trsvcid": "$NVMF_PORT", 00:12:30.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:30.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:30.347 "hdgst": ${hdgst:-false}, 00:12:30.347 "ddgst": ${ddgst:-false} 00:12:30.347 }, 00:12:30.347 "method": "bdev_nvme_attach_controller" 00:12:30.347 } 00:12:30.347 EOF 00:12:30.347 )") 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
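A note on the bdevperf invocation above: the --json /dev/fd/63 argument is a bash process substitution through which gen_nvmf_target_json feeds bdevperf its controller configuration; the heredoc fragment above is the per-controller template, and the fully resolved JSON that bdevperf receives is printed in the trace lines that follow. A sketch of the effective command is shown below (the use of process substitution is inferred from the two host_management.sh@72 trace lines, not shown verbatim in the log):

# Effective bdevperf invocation, with the generated NVMe-oF attach config fed in via process substitution.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10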
00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:30.347 17:03:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:30.347 "params": { 00:12:30.347 "name": "Nvme0", 00:12:30.347 "trtype": "tcp", 00:12:30.347 "traddr": "10.0.0.2", 00:12:30.347 "adrfam": "ipv4", 00:12:30.347 "trsvcid": "4420", 00:12:30.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:30.347 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:30.347 "hdgst": false, 00:12:30.347 "ddgst": false 00:12:30.347 }, 00:12:30.347 "method": "bdev_nvme_attach_controller" 00:12:30.347 }' 00:12:30.610 [2024-05-15 17:03:18.021612] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:12:30.610 [2024-05-15 17:03:18.021654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3004950 ] 00:12:30.610 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.610 [2024-05-15 17:03:18.076131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.610 [2024-05-15 17:03:18.148702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.868 Running I/O for 10 seconds... 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.437 17:03:18 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.437 17:03:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:31.438 [2024-05-15 17:03:18.907103] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907190] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907197] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907209] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907215] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907227] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907233] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907256] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907261] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907273] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907285] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907291] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907307] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907357] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907380] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907386] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907392] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907397] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the 
state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907403] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907427] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907433] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907439] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907462] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907475] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907481] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907486] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907498] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907514] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x241d1c0 is same with the state(5) to be set 00:12:31.438 [2024-05-15 17:03:18.907629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.438 [2024-05-15 17:03:18.907661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.438 [2024-05-15 17:03:18.907678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.438 [2024-05-15 17:03:18.907685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.438 [2024-05-15 17:03:18.907694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.438 [2024-05-15 17:03:18.907701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.438 [2024-05-15 17:03:18.907710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.438 [2024-05-15 17:03:18.907716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.438 [2024-05-15 17:03:18.907725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.438 [2024-05-15 17:03:18.907731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.438 [2024-05-15 17:03:18.907740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.438 [2024-05-15 17:03:18.907746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.438 [2024-05-15 17:03:18.907754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.438 [2024-05-15 17:03:18.907761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.438 [2024-05-15 17:03:18.907769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.438 [2024-05-15 17:03:18.907776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.438 [2024-05-15 17:03:18.907789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.907795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.907803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.907810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.907818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.907825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.907833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.907840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.907848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.907855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.907863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.907870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.907878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.907885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.907893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.907900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.907908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.907915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.907924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.907930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.907939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.907946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.907954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.907962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.907970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.907979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.907987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.907994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.439 [2024-05-15 17:03:18.908273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:12:31.439 [2024-05-15 17:03:18.908281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:12:31.440 [2024-05-15 17:03:18.908597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:31.440 [2024-05-15 17:03:18.908650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.908658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b1740 is same with the state(5) to be set 00:12:31.440 [2024-05-15 17:03:18.908709] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9b1740 was disconnected and freed. reset controller. 00:12:31.440 [2024-05-15 17:03:18.909635] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:31.440 task offset: 122880 on job bdev=Nvme0n1 fails 00:12:31.440 00:12:31.440 Latency(us) 00:12:31.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.440 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:31.440 Job: Nvme0n1 ended in about 0.60 seconds with error 00:12:31.440 Verification LBA range: start 0x0 length 0x400 00:12:31.440 Nvme0n1 : 0.60 1597.12 99.82 106.47 0.00 36831.71 3647.22 31457.28 00:12:31.440 =================================================================================================================== 00:12:31.440 Total : 1597.12 99.82 106.47 0.00 36831.71 3647.22 31457.28 00:12:31.440 [2024-05-15 17:03:18.911255] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:31.440 [2024-05-15 17:03:18.911271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a0840 (9): Bad file descriptor 00:12:31.440 17:03:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.440 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:31.440 17:03:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.440 17:03:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:31.440 [2024-05-15 17:03:18.913921] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:12:31.440 [2024-05-15 17:03:18.914035] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:12:31.440 [2024-05-15 17:03:18.914059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.440 [2024-05-15 17:03:18.914074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:12:31.440 [2024-05-15 17:03:18.914085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:12:31.440 [2024-05-15 17:03:18.914091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:12:31.440 [2024-05-15 17:03:18.914098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5a0840 00:12:31.440 [2024-05-15 17:03:18.914117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a0840 (9): Bad file descriptor 00:12:31.440 [2024-05-15 17:03:18.914128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:12:31.440 [2024-05-15 17:03:18.914135] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:12:31.440 [2024-05-15 17:03:18.914143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:12:31.440 [2024-05-15 17:03:18.914155] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:12:31.440 17:03:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.440 17:03:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:32.375 17:03:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3004950 00:12:32.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3004950) - No such process 00:12:32.375 17:03:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:32.375 17:03:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:32.375 17:03:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:32.375 17:03:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:32.375 17:03:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:32.375 17:03:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:32.375 17:03:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:32.375 17:03:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:32.375 { 00:12:32.375 "params": { 00:12:32.375 "name": "Nvme$subsystem", 00:12:32.375 "trtype": "$TEST_TRANSPORT", 00:12:32.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:32.375 "adrfam": "ipv4", 00:12:32.375 "trsvcid": "$NVMF_PORT", 00:12:32.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:32.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:32.375 "hdgst": ${hdgst:-false}, 
00:12:32.375 "ddgst": ${ddgst:-false} 00:12:32.375 }, 00:12:32.375 "method": "bdev_nvme_attach_controller" 00:12:32.375 } 00:12:32.375 EOF 00:12:32.375 )") 00:12:32.375 17:03:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:32.375 17:03:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:32.375 17:03:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:32.375 17:03:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:32.375 "params": { 00:12:32.375 "name": "Nvme0", 00:12:32.375 "trtype": "tcp", 00:12:32.375 "traddr": "10.0.0.2", 00:12:32.375 "adrfam": "ipv4", 00:12:32.375 "trsvcid": "4420", 00:12:32.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:32.375 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:32.375 "hdgst": false, 00:12:32.375 "ddgst": false 00:12:32.375 }, 00:12:32.375 "method": "bdev_nvme_attach_controller" 00:12:32.375 }' 00:12:32.375 [2024-05-15 17:03:19.975287] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:12:32.375 [2024-05-15 17:03:19.975333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3005202 ] 00:12:32.375 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.375 [2024-05-15 17:03:20.029236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.632 [2024-05-15 17:03:20.106316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.632 Running I/O for 1 seconds... 00:12:34.020 00:12:34.020 Latency(us) 00:12:34.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.020 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:34.020 Verification LBA range: start 0x0 length 0x400 00:12:34.020 Nvme0n1 : 1.01 1842.81 115.18 0.00 0.00 34200.46 7864.32 29177.77 00:12:34.020 =================================================================================================================== 00:12:34.020 Total : 1842.81 115.18 0.00 0.00 34200.46 7864.32 29177.77 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.020 rmmod nvme_tcp 00:12:34.020 rmmod nvme_fabrics 00:12:34.020 rmmod nvme_keyring 00:12:34.020 17:03:21 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3004683 ']' 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3004683 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3004683 ']' 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3004683 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3004683 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3004683' 00:12:34.020 killing process with pid 3004683 00:12:34.020 17:03:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3004683 00:12:34.021 [2024-05-15 17:03:21.598901] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:34.021 17:03:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3004683 00:12:34.279 [2024-05-15 17:03:21.807489] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:34.279 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:34.279 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:34.279 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:34.279 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.279 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:34.279 17:03:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.279 17:03:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.279 17:03:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.811 17:03:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:36.811 17:03:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:36.811 00:12:36.811 real 0m12.471s 00:12:36.811 user 0m22.570s 00:12:36.811 sys 0m5.196s 00:12:36.811 17:03:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:36.811 17:03:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:36.811 ************************************ 00:12:36.811 END TEST nvmf_host_management 00:12:36.811 ************************************ 00:12:36.811 17:03:23 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test 
nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:36.811 17:03:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:36.811 17:03:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:36.811 17:03:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:36.811 ************************************ 00:12:36.811 START TEST nvmf_lvol 00:12:36.811 ************************************ 00:12:36.811 17:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:36.811 * Looking for test storage... 00:12:36.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.811 17:03:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 
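The shell trace that follows drives the whole lvol-over-TCP setup through rpc.py (the test assigns the full workspace path to rpc_py just below). A condensed, hand-runnable sketch of that sequence, assuming nvmf_tgt is already running, rpc.py is SPDK's scripts/rpc.py on PATH, and the listener address matches this rig's 10.0.0.2:4420:

  # back the lvol store with two 64 MiB, 512 B-block malloc bdevs combined into raid0
  rpc.py bdev_malloc_create 64 512                                  # -> Malloc0
  rpc.py bdev_malloc_create 64 512                                  # -> Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                  # prints the new lvstore UUID
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                 # 20 MiB lvol; prints its UUID
  # export the lvol over NVMe/TCP
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

All command forms are the ones this log records; only the variable capture and the bare rpc.py name are editorial shorthand.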
00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:36.812 17:03:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:42.081 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.081 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:42.082 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:42.082 Found net devices under 0000:86:00.0: cvl_0_0 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:42.082 Found net devices under 0000:86:00.1: cvl_0_1 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:42.082 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:42.082 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:12:42.082 00:12:42.082 --- 10.0.0.2 ping statistics --- 00:12:42.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.082 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.082 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.082 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:12:42.082 00:12:42.082 --- 10.0.0.1 ping statistics --- 00:12:42.082 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.082 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3008953 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3008953 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3008953 ']' 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:42.082 17:03:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:42.082 [2024-05-15 17:03:29.623390] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
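Before nvmf_tgt comes up, nvmf_tcp_init has split the two E810 ports between the default namespace and a private one, so the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1) exchange real TCP traffic on a single host; the two pings above verify both directions. A minimal sketch of that wiring, run as root and using this rig's interface names (they will differ on other hardware):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # open TCP/4420 on the initiator interface
  ping -c 1 10.0.0.2                                           # default namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target namespace -> default namespace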
00:12:42.082 [2024-05-15 17:03:29.623435] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.082 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.082 [2024-05-15 17:03:29.681757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:42.340 [2024-05-15 17:03:29.760868] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.340 [2024-05-15 17:03:29.760902] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.340 [2024-05-15 17:03:29.760909] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.340 [2024-05-15 17:03:29.760916] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.340 [2024-05-15 17:03:29.760921] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.341 [2024-05-15 17:03:29.760961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.341 [2024-05-15 17:03:29.761060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.341 [2024-05-15 17:03:29.761062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.907 17:03:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:42.907 17:03:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:12:42.907 17:03:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:42.907 17:03:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:42.907 17:03:30 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:42.907 17:03:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.907 17:03:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:43.165 [2024-05-15 17:03:30.618900] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.165 17:03:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:43.423 17:03:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:43.423 17:03:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:43.423 17:03:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:43.423 17:03:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:43.681 17:03:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:43.940 17:03:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5f8e8e2c-abc1-4c01-8911-ac620b773f7b 00:12:43.940 17:03:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5f8e8e2c-abc1-4c01-8911-ac620b773f7b lvol 20 00:12:43.940 17:03:31 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # lvol=4d2f6cb1-9d38-449f-87ee-2419657dc652 00:12:43.940 17:03:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:44.197 17:03:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4d2f6cb1-9d38-449f-87ee-2419657dc652 00:12:44.455 17:03:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:44.455 [2024-05-15 17:03:32.096883] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:44.455 [2024-05-15 17:03:32.097154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.712 17:03:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:44.712 17:03:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:44.712 17:03:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3009448 00:12:44.712 17:03:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:44.712 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.647 17:03:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 4d2f6cb1-9d38-449f-87ee-2419657dc652 MY_SNAPSHOT 00:12:45.905 17:03:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=090ff370-a987-4f52-85aa-fc32f794e476 00:12:45.905 17:03:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 4d2f6cb1-9d38-449f-87ee-2419657dc652 30 00:12:46.163 17:03:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 090ff370-a987-4f52-85aa-fc32f794e476 MY_CLONE 00:12:46.421 17:03:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f87c6186-35fa-47cd-96bc-5dfda36d1e84 00:12:46.421 17:03:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f87c6186-35fa-47cd-96bc-5dfda36d1e84 00:12:46.987 17:03:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3009448 00:12:55.105 Initializing NVMe Controllers 00:12:55.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:55.105 Controller IO queue size 128, less than required. 00:12:55.105 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:55.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:55.105 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:55.105 Initialization complete. Launching workers. 
00:12:55.105 ======================================================== 00:12:55.105 Latency(us) 00:12:55.105 Device Information : IOPS MiB/s Average min max 00:12:55.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11752.20 45.91 10899.30 1732.07 102620.04 00:12:55.105 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11666.00 45.57 10973.66 3733.90 45644.70 00:12:55.105 ======================================================== 00:12:55.105 Total : 23418.20 91.48 10936.34 1732.07 102620.04 00:12:55.105 00:12:55.105 17:03:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:55.408 17:03:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4d2f6cb1-9d38-449f-87ee-2419657dc652 00:12:55.667 17:03:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5f8e8e2c-abc1-4c01-8911-ac620b773f7b 00:12:55.667 17:03:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:55.667 17:03:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:55.667 17:03:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:55.667 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:55.667 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:55.667 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:55.667 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:55.667 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:55.667 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:55.667 rmmod nvme_tcp 00:12:55.667 rmmod nvme_fabrics 00:12:55.925 rmmod nvme_keyring 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3008953 ']' 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3008953 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3008953 ']' 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3008953 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3008953 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3008953' 00:12:55.925 killing process with pid 3008953 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3008953 00:12:55.925 [2024-05-15 17:03:43.401001] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:12:55.925 17:03:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3008953 00:12:56.183 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:56.183 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:56.183 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:56.183 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.183 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:56.183 17:03:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.183 17:03:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.183 17:03:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.086 17:03:45 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:58.086 00:12:58.086 real 0m21.742s 00:12:58.086 user 1m4.285s 00:12:58.086 sys 0m6.697s 00:12:58.086 17:03:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:58.086 17:03:45 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:58.086 ************************************ 00:12:58.086 END TEST nvmf_lvol 00:12:58.086 ************************************ 00:12:58.345 17:03:45 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:58.345 17:03:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:58.345 17:03:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:58.345 17:03:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:58.345 ************************************ 00:12:58.345 START TEST nvmf_lvs_grow 00:12:58.345 ************************************ 00:12:58.345 17:03:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:58.345 * Looking for test storage... 
00:12:58.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.345 17:03:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.345 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:58.345 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.345 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.345 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.345 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.345 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.345 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.345 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.345 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.345 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.345 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:58.346 17:03:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:03.609 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:03.609 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.609 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:03.610 Found net devices under 0000:86:00.0: cvl_0_0 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:03.610 Found net devices under 0000:86:00.1: cvl_0_1 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:03.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:13:03.610 00:13:03.610 --- 10.0.0.2 ping statistics --- 00:13:03.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.610 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:13:03.610 00:13:03.610 --- 10.0.0.1 ping statistics --- 00:13:03.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.610 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3014718 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3014718 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3014718 ']' 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:03.610 17:03:50 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:03.610 [2024-05-15 17:03:50.982608] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:13:03.610 [2024-05-15 17:03:50.982654] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.610 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.610 [2024-05-15 17:03:51.038008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.610 [2024-05-15 17:03:51.116244] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.610 [2024-05-15 17:03:51.116281] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:03.610 [2024-05-15 17:03:51.116289] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.610 [2024-05-15 17:03:51.116295] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.610 [2024-05-15 17:03:51.116300] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.610 [2024-05-15 17:03:51.116318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.175 17:03:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:04.175 17:03:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:13:04.175 17:03:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:04.175 17:03:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:04.175 17:03:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:04.175 17:03:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.175 17:03:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:04.432 [2024-05-15 17:03:51.966346] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.432 17:03:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:04.432 17:03:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:04.432 17:03:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:04.432 17:03:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:04.432 ************************************ 00:13:04.432 START TEST lvs_grow_clean 00:13:04.432 ************************************ 00:13:04.432 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:13:04.432 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:04.432 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:04.432 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:04.432 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:04.432 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:04.432 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:04.432 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:04.432 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:04.432 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:04.689 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:04.689 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:04.946 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e 00:13:04.946 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e 00:13:04.946 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:04.946 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:04.946 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:04.946 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e lvol 150 00:13:05.204 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=89430ace-3261-48c7-9d9d-795f65a06958 00:13:05.204 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:05.204 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:05.460 [2024-05-15 17:03:52.897813] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:05.460 [2024-05-15 17:03:52.897864] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:05.460 true 00:13:05.460 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e 00:13:05.460 17:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:05.460 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:05.460 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:05.716 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 89430ace-3261-48c7-9d9d-795f65a06958 00:13:05.973 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:05.973 [2024-05-15 17:03:53.579674] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:05.973 [2024-05-15 
17:03:53.579900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.973 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:06.231 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3015249 00:13:06.231 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:06.231 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3015249 /var/tmp/bdevperf.sock 00:13:06.231 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3015249 ']' 00:13:06.231 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:06.231 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:06.231 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:06.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:06.231 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:06.231 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:06.231 17:03:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:06.231 [2024-05-15 17:03:53.800041] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:13:06.231 [2024-05-15 17:03:53.800089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3015249 ] 00:13:06.231 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.231 [2024-05-15 17:03:53.853689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.489 [2024-05-15 17:03:53.934716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.054 17:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:07.055 17:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:13:07.055 17:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:07.312 Nvme0n1 00:13:07.312 17:03:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:07.569 [ 00:13:07.569 { 00:13:07.569 "name": "Nvme0n1", 00:13:07.569 "aliases": [ 00:13:07.569 "89430ace-3261-48c7-9d9d-795f65a06958" 00:13:07.569 ], 00:13:07.569 "product_name": "NVMe disk", 00:13:07.569 "block_size": 4096, 00:13:07.569 "num_blocks": 38912, 00:13:07.569 "uuid": "89430ace-3261-48c7-9d9d-795f65a06958", 00:13:07.569 "assigned_rate_limits": { 00:13:07.569 "rw_ios_per_sec": 0, 00:13:07.569 "rw_mbytes_per_sec": 0, 00:13:07.569 "r_mbytes_per_sec": 0, 00:13:07.569 "w_mbytes_per_sec": 0 00:13:07.569 }, 00:13:07.569 "claimed": false, 00:13:07.569 "zoned": false, 00:13:07.569 "supported_io_types": { 00:13:07.569 "read": true, 00:13:07.569 "write": true, 00:13:07.569 "unmap": true, 00:13:07.569 "write_zeroes": true, 00:13:07.569 "flush": true, 00:13:07.569 "reset": true, 00:13:07.569 "compare": true, 00:13:07.569 "compare_and_write": true, 00:13:07.569 "abort": true, 00:13:07.569 "nvme_admin": true, 00:13:07.569 "nvme_io": true 00:13:07.569 }, 00:13:07.569 "memory_domains": [ 00:13:07.569 { 00:13:07.569 "dma_device_id": "system", 00:13:07.569 "dma_device_type": 1 00:13:07.569 } 00:13:07.569 ], 00:13:07.569 "driver_specific": { 00:13:07.569 "nvme": [ 00:13:07.569 { 00:13:07.569 "trid": { 00:13:07.569 "trtype": "TCP", 00:13:07.569 "adrfam": "IPv4", 00:13:07.569 "traddr": "10.0.0.2", 00:13:07.569 "trsvcid": "4420", 00:13:07.569 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:07.569 }, 00:13:07.569 "ctrlr_data": { 00:13:07.569 "cntlid": 1, 00:13:07.569 "vendor_id": "0x8086", 00:13:07.569 "model_number": "SPDK bdev Controller", 00:13:07.569 "serial_number": "SPDK0", 00:13:07.569 "firmware_revision": "24.05", 00:13:07.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:07.569 "oacs": { 00:13:07.569 "security": 0, 00:13:07.569 "format": 0, 00:13:07.569 "firmware": 0, 00:13:07.569 "ns_manage": 0 00:13:07.569 }, 00:13:07.569 "multi_ctrlr": true, 00:13:07.569 "ana_reporting": false 00:13:07.569 }, 00:13:07.569 "vs": { 00:13:07.569 "nvme_version": "1.3" 00:13:07.569 }, 00:13:07.569 "ns_data": { 00:13:07.569 "id": 1, 00:13:07.569 "can_share": true 00:13:07.569 } 00:13:07.569 } 00:13:07.570 ], 00:13:07.570 "mp_policy": "active_passive" 00:13:07.570 } 00:13:07.570 } 00:13:07.570 ] 00:13:07.570 17:03:55 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3015469 00:13:07.570 17:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:07.570 17:03:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:07.570 Running I/O for 10 seconds... 00:13:08.942 Latency(us) 00:13:08.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:08.942 Nvme0n1 : 1.00 22879.00 89.37 0.00 0.00 0.00 0.00 0.00 00:13:08.942 =================================================================================================================== 00:13:08.942 Total : 22879.00 89.37 0.00 0.00 0.00 0.00 0.00 00:13:08.942 00:13:09.506 17:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e 00:13:09.763 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:09.763 Nvme0n1 : 2.00 23037.00 89.99 0.00 0.00 0.00 0.00 0.00 00:13:09.763 =================================================================================================================== 00:13:09.764 Total : 23037.00 89.99 0.00 0.00 0.00 0.00 0.00 00:13:09.764 00:13:09.764 true 00:13:09.764 17:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e 00:13:09.764 17:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:10.021 17:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:10.021 17:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:10.021 17:03:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3015469 00:13:10.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:10.587 Nvme0n1 : 3.00 23094.67 90.21 0.00 0.00 0.00 0.00 0.00 00:13:10.587 =================================================================================================================== 00:13:10.587 Total : 23094.67 90.21 0.00 0.00 0.00 0.00 0.00 00:13:10.587 00:13:11.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:11.959 Nvme0n1 : 4.00 23141.75 90.40 0.00 0.00 0.00 0.00 0.00 00:13:11.959 =================================================================================================================== 00:13:11.959 Total : 23141.75 90.40 0.00 0.00 0.00 0.00 0.00 00:13:11.959 00:13:12.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:12.891 Nvme0n1 : 5.00 23182.20 90.56 0.00 0.00 0.00 0.00 0.00 00:13:12.891 =================================================================================================================== 00:13:12.891 Total : 23182.20 90.56 0.00 0.00 0.00 0.00 0.00 00:13:12.891 00:13:13.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:13.824 Nvme0n1 : 6.00 23201.00 90.63 0.00 0.00 0.00 0.00 0.00 00:13:13.824 
=================================================================================================================== 00:13:13.824 Total : 23201.00 90.63 0.00 0.00 0.00 0.00 0.00 00:13:13.824 00:13:14.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:14.754 Nvme0n1 : 7.00 23226.14 90.73 0.00 0.00 0.00 0.00 0.00 00:13:14.754 =================================================================================================================== 00:13:14.754 Total : 23226.14 90.73 0.00 0.00 0.00 0.00 0.00 00:13:14.754 00:13:15.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:15.682 Nvme0n1 : 8.00 23270.38 90.90 0.00 0.00 0.00 0.00 0.00 00:13:15.682 =================================================================================================================== 00:13:15.682 Total : 23270.38 90.90 0.00 0.00 0.00 0.00 0.00 00:13:15.682 00:13:16.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:16.618 Nvme0n1 : 9.00 23297.33 91.01 0.00 0.00 0.00 0.00 0.00 00:13:16.618 =================================================================================================================== 00:13:16.618 Total : 23297.33 91.01 0.00 0.00 0.00 0.00 0.00 00:13:16.618 00:13:17.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.596 Nvme0n1 : 10.00 23312.20 91.06 0.00 0.00 0.00 0.00 0.00 00:13:17.596 =================================================================================================================== 00:13:17.596 Total : 23312.20 91.06 0.00 0.00 0.00 0.00 0.00 00:13:17.596 00:13:17.596 00:13:17.596 Latency(us) 00:13:17.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.596 Nvme0n1 : 10.01 23311.33 91.06 0.00 0.00 5487.05 3276.80 14075.99 00:13:17.596 =================================================================================================================== 00:13:17.596 Total : 23311.33 91.06 0.00 0.00 5487.05 3276.80 14075.99 00:13:17.596 0 00:13:17.596 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3015249 00:13:17.596 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3015249 ']' 00:13:17.596 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3015249 00:13:17.596 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:13:17.596 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:17.596 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3015249 00:13:17.854 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:17.854 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:17.854 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3015249' 00:13:17.854 killing process with pid 3015249 00:13:17.854 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3015249 00:13:17.854 Received shutdown signal, test time was about 10.000000 seconds 00:13:17.854 00:13:17.854 Latency(us) 00:13:17.854 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:13:17.854 =================================================================================================================== 00:13:17.854 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:17.854 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3015249 00:13:17.854 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:18.111 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:18.369 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:18.369 17:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:18.627 [2024-05-15 17:04:06.188715] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:18.627 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e 00:13:18.885 request: 00:13:18.885 { 00:13:18.885 "uuid": "bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e", 00:13:18.885 "method": "bdev_lvol_get_lvstores", 00:13:18.885 "req_id": 1 00:13:18.885 } 00:13:18.885 Got JSON-RPC error response 00:13:18.885 response: 00:13:18.885 { 00:13:18.885 "code": -19, 00:13:18.885 "message": "No such device" 00:13:18.885 } 00:13:18.885 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:18.885 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:18.885 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:18.885 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:18.885 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:19.143 aio_bdev 00:13:19.143 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 89430ace-3261-48c7-9d9d-795f65a06958 00:13:19.143 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=89430ace-3261-48c7-9d9d-795f65a06958 00:13:19.143 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:19.143 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:13:19.143 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:19.143 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:19.143 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:19.143 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 89430ace-3261-48c7-9d9d-795f65a06958 -t 2000 00:13:19.401 [ 00:13:19.401 { 00:13:19.401 "name": "89430ace-3261-48c7-9d9d-795f65a06958", 00:13:19.401 "aliases": [ 00:13:19.401 "lvs/lvol" 00:13:19.401 ], 00:13:19.401 "product_name": "Logical Volume", 00:13:19.401 "block_size": 4096, 00:13:19.401 "num_blocks": 38912, 00:13:19.401 "uuid": "89430ace-3261-48c7-9d9d-795f65a06958", 00:13:19.401 "assigned_rate_limits": { 00:13:19.401 "rw_ios_per_sec": 0, 00:13:19.401 "rw_mbytes_per_sec": 0, 00:13:19.401 "r_mbytes_per_sec": 0, 00:13:19.401 "w_mbytes_per_sec": 0 00:13:19.401 }, 00:13:19.401 "claimed": false, 00:13:19.401 "zoned": false, 00:13:19.401 "supported_io_types": { 00:13:19.401 "read": true, 00:13:19.401 "write": true, 00:13:19.401 "unmap": true, 00:13:19.401 "write_zeroes": true, 00:13:19.401 "flush": false, 00:13:19.401 "reset": true, 00:13:19.401 "compare": false, 00:13:19.401 "compare_and_write": false, 00:13:19.401 "abort": false, 00:13:19.401 "nvme_admin": false, 00:13:19.401 "nvme_io": false 00:13:19.401 }, 00:13:19.401 "driver_specific": { 00:13:19.401 "lvol": { 00:13:19.401 "lvol_store_uuid": "bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e", 00:13:19.401 "base_bdev": "aio_bdev", 
00:13:19.401 "thin_provision": false, 00:13:19.401 "num_allocated_clusters": 38, 00:13:19.401 "snapshot": false, 00:13:19.401 "clone": false, 00:13:19.401 "esnap_clone": false 00:13:19.401 } 00:13:19.401 } 00:13:19.401 } 00:13:19.401 ] 00:13:19.401 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:13:19.401 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e 00:13:19.401 17:04:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:19.659 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:19.659 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e 00:13:19.659 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:19.659 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:19.659 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 89430ace-3261-48c7-9d9d-795f65a06958 00:13:19.917 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bed1dc6f-5fd4-41e9-bb9b-cd9e98bbc13e 00:13:20.174 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:20.174 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:20.432 00:13:20.432 real 0m15.823s 00:13:20.432 user 0m15.504s 00:13:20.432 sys 0m1.360s 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:20.432 ************************************ 00:13:20.432 END TEST lvs_grow_clean 00:13:20.432 ************************************ 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:20.432 ************************************ 00:13:20.432 START TEST lvs_grow_dirty 00:13:20.432 ************************************ 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:20.432 17:04:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:20.690 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:20.690 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:20.690 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c7aaaa63-9040-437b-88c0-3fb3adbea119 00:13:20.690 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7aaaa63-9040-437b-88c0-3fb3adbea119 00:13:20.690 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:20.949 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:20.949 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:20.949 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c7aaaa63-9040-437b-88c0-3fb3adbea119 lvol 150 00:13:21.208 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e742dc47-6958-4f5c-83d5-fa74165a3a47 00:13:21.208 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:21.208 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:21.208 [2024-05-15 17:04:08.799914] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:21.209 [2024-05-15 17:04:08.799961] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:21.209 true 00:13:21.209 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7aaaa63-9040-437b-88c0-3fb3adbea119 00:13:21.209 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:13:21.467 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:21.467 17:04:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:21.724 17:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e742dc47-6958-4f5c-83d5-fa74165a3a47 00:13:21.724 17:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:21.982 [2024-05-15 17:04:09.489982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.982 17:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:22.240 17:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3017916 00:13:22.240 17:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:22.240 17:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3017916 /var/tmp/bdevperf.sock 00:13:22.240 17:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3017916 ']' 00:13:22.240 17:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:22.240 17:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:22.240 17:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:22.240 17:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:22.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:22.240 17:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:22.240 17:04:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:22.240 [2024-05-15 17:04:09.718083] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
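For readers following the flow: the bdevperf instance starting above connects to a subsystem that was exported a few lines earlier with three RPCs. A minimal sketch of that target-side setup, using the NQN, serial and address from this run (rpc.py stands for the full scripts/rpc.py path and <lvol-uuid> is a placeholder for the lvol created above):

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0                 # -a: allow any host, serial SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>                 # expose the lvol as a namespace
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420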
00:13:22.240 [2024-05-15 17:04:09.718131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017916 ] 00:13:22.240 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.240 [2024-05-15 17:04:09.769968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.240 [2024-05-15 17:04:09.841314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.173 17:04:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:23.173 17:04:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:13:23.173 17:04:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:23.431 Nvme0n1 00:13:23.431 17:04:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:23.431 [ 00:13:23.431 { 00:13:23.431 "name": "Nvme0n1", 00:13:23.431 "aliases": [ 00:13:23.431 "e742dc47-6958-4f5c-83d5-fa74165a3a47" 00:13:23.431 ], 00:13:23.431 "product_name": "NVMe disk", 00:13:23.431 "block_size": 4096, 00:13:23.431 "num_blocks": 38912, 00:13:23.431 "uuid": "e742dc47-6958-4f5c-83d5-fa74165a3a47", 00:13:23.431 "assigned_rate_limits": { 00:13:23.431 "rw_ios_per_sec": 0, 00:13:23.431 "rw_mbytes_per_sec": 0, 00:13:23.431 "r_mbytes_per_sec": 0, 00:13:23.431 "w_mbytes_per_sec": 0 00:13:23.431 }, 00:13:23.431 "claimed": false, 00:13:23.431 "zoned": false, 00:13:23.431 "supported_io_types": { 00:13:23.431 "read": true, 00:13:23.431 "write": true, 00:13:23.431 "unmap": true, 00:13:23.431 "write_zeroes": true, 00:13:23.431 "flush": true, 00:13:23.431 "reset": true, 00:13:23.431 "compare": true, 00:13:23.431 "compare_and_write": true, 00:13:23.431 "abort": true, 00:13:23.431 "nvme_admin": true, 00:13:23.431 "nvme_io": true 00:13:23.431 }, 00:13:23.431 "memory_domains": [ 00:13:23.431 { 00:13:23.431 "dma_device_id": "system", 00:13:23.431 "dma_device_type": 1 00:13:23.431 } 00:13:23.431 ], 00:13:23.431 "driver_specific": { 00:13:23.431 "nvme": [ 00:13:23.431 { 00:13:23.431 "trid": { 00:13:23.431 "trtype": "TCP", 00:13:23.431 "adrfam": "IPv4", 00:13:23.431 "traddr": "10.0.0.2", 00:13:23.431 "trsvcid": "4420", 00:13:23.431 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:23.431 }, 00:13:23.431 "ctrlr_data": { 00:13:23.431 "cntlid": 1, 00:13:23.431 "vendor_id": "0x8086", 00:13:23.431 "model_number": "SPDK bdev Controller", 00:13:23.431 "serial_number": "SPDK0", 00:13:23.431 "firmware_revision": "24.05", 00:13:23.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:23.431 "oacs": { 00:13:23.431 "security": 0, 00:13:23.431 "format": 0, 00:13:23.431 "firmware": 0, 00:13:23.431 "ns_manage": 0 00:13:23.431 }, 00:13:23.431 "multi_ctrlr": true, 00:13:23.431 "ana_reporting": false 00:13:23.431 }, 00:13:23.431 "vs": { 00:13:23.431 "nvme_version": "1.3" 00:13:23.431 }, 00:13:23.431 "ns_data": { 00:13:23.431 "id": 1, 00:13:23.431 "can_share": true 00:13:23.431 } 00:13:23.431 } 00:13:23.431 ], 00:13:23.431 "mp_policy": "active_passive" 00:13:23.431 } 00:13:23.431 } 00:13:23.431 ] 00:13:23.431 17:04:11 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3018150 00:13:23.431 17:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:23.431 17:04:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:23.688 Running I/O for 10 seconds... 00:13:24.621 Latency(us) 00:13:24.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:24.621 Nvme0n1 : 1.00 22109.00 86.36 0.00 0.00 0.00 0.00 0.00 00:13:24.621 =================================================================================================================== 00:13:24.621 Total : 22109.00 86.36 0.00 0.00 0.00 0.00 0.00 00:13:24.621 00:13:25.554 17:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c7aaaa63-9040-437b-88c0-3fb3adbea119 00:13:25.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:25.554 Nvme0n1 : 2.00 22202.50 86.73 0.00 0.00 0.00 0.00 0.00 00:13:25.554 =================================================================================================================== 00:13:25.554 Total : 22202.50 86.73 0.00 0.00 0.00 0.00 0.00 00:13:25.554 00:13:25.812 true 00:13:25.812 17:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7aaaa63-9040-437b-88c0-3fb3adbea119 00:13:25.812 17:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:25.812 17:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:25.812 17:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:25.812 17:04:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3018150 00:13:26.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:26.744 Nvme0n1 : 3.00 22241.67 86.88 0.00 0.00 0.00 0.00 0.00 00:13:26.744 =================================================================================================================== 00:13:26.744 Total : 22241.67 86.88 0.00 0.00 0.00 0.00 0.00 00:13:26.744 00:13:27.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:27.678 Nvme0n1 : 4.00 22313.25 87.16 0.00 0.00 0.00 0.00 0.00 00:13:27.678 =================================================================================================================== 00:13:27.678 Total : 22313.25 87.16 0.00 0.00 0.00 0.00 0.00 00:13:27.678 00:13:28.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:28.612 Nvme0n1 : 5.00 22359.40 87.34 0.00 0.00 0.00 0.00 0.00 00:13:28.612 =================================================================================================================== 00:13:28.612 Total : 22359.40 87.34 0.00 0.00 0.00 0.00 0.00 00:13:28.612 00:13:29.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:29.548 Nvme0n1 : 6.00 22400.83 87.50 0.00 0.00 0.00 0.00 0.00 00:13:29.548 
=================================================================================================================== 00:13:29.548 Total : 22400.83 87.50 0.00 0.00 0.00 0.00 0.00 00:13:29.548 00:13:30.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:30.923 Nvme0n1 : 7.00 22429.29 87.61 0.00 0.00 0.00 0.00 0.00 00:13:30.923 =================================================================================================================== 00:13:30.923 Total : 22429.29 87.61 0.00 0.00 0.00 0.00 0.00 00:13:30.923 00:13:31.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:31.858 Nvme0n1 : 8.00 22447.62 87.69 0.00 0.00 0.00 0.00 0.00 00:13:31.858 =================================================================================================================== 00:13:31.858 Total : 22447.62 87.69 0.00 0.00 0.00 0.00 0.00 00:13:31.858 00:13:32.795 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:32.795 Nvme0n1 : 9.00 22435.22 87.64 0.00 0.00 0.00 0.00 0.00 00:13:32.795 =================================================================================================================== 00:13:32.795 Total : 22435.22 87.64 0.00 0.00 0.00 0.00 0.00 00:13:32.795 00:13:33.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.732 Nvme0n1 : 10.00 22449.30 87.69 0.00 0.00 0.00 0.00 0.00 00:13:33.732 =================================================================================================================== 00:13:33.732 Total : 22449.30 87.69 0.00 0.00 0.00 0.00 0.00 00:13:33.732 00:13:33.732 00:13:33.732 Latency(us) 00:13:33.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.732 Nvme0n1 : 10.01 22449.91 87.69 0.00 0.00 5697.31 4274.09 11055.64 00:13:33.732 =================================================================================================================== 00:13:33.732 Total : 22449.91 87.69 0.00 0.00 5697.31 4274.09 11055.64 00:13:33.732 0 00:13:33.732 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3017916 00:13:33.732 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3017916 ']' 00:13:33.732 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3017916 00:13:33.732 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:13:33.732 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:33.732 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3017916 00:13:33.732 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:33.732 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:33.732 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3017916' 00:13:33.732 killing process with pid 3017916 00:13:33.732 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3017916 00:13:33.732 Received shutdown signal, test time was about 10.000000 seconds 00:13:33.732 00:13:33.732 Latency(us) 00:13:33.732 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:13:33.732 =================================================================================================================== 00:13:33.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:33.732 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3017916 00:13:33.992 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:33.992 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:34.251 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7aaaa63-9040-437b-88c0-3fb3adbea119 00:13:34.251 17:04:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3014718 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3014718 00:13:34.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3014718 Killed "${NVMF_APP[@]}" "$@" 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3019994 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3019994 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3019994 ']' 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
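The dirty variant hinges on the step just above: the running nvmf target is SIGKILLed while the lvstore is still open, then a fresh target is started so the lvstore has to be recovered from disk. A simplified sketch of that step, under the assumption that $nvmfpid holds the old target's pid and the binary path matches this workspace:

  kill -9 "$nvmfpid"                                                            # unclean shutdown with the lvstore loaded
  wait "$nvmfpid" 2>/dev/null || true
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &    # restart the target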
00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:34.510 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:34.510 [2024-05-15 17:04:22.105802] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:13:34.510 [2024-05-15 17:04:22.105847] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.510 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.510 [2024-05-15 17:04:22.161671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.786 [2024-05-15 17:04:22.242073] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.786 [2024-05-15 17:04:22.242106] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.786 [2024-05-15 17:04:22.242114] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.786 [2024-05-15 17:04:22.242119] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.786 [2024-05-15 17:04:22.242124] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.786 [2024-05-15 17:04:22.242140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.364 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:35.365 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:13:35.365 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:35.365 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:35.365 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:35.365 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.365 17:04:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:35.623 [2024-05-15 17:04:23.099440] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:35.623 [2024-05-15 17:04:23.099522] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:35.623 [2024-05-15 17:04:23.099546] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:35.623 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:35.623 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e742dc47-6958-4f5c-83d5-fa74165a3a47 00:13:35.623 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=e742dc47-6958-4f5c-83d5-fa74165a3a47 00:13:35.623 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:35.623 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:13:35.623 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:35.623 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:35.623 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:35.882 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e742dc47-6958-4f5c-83d5-fa74165a3a47 -t 2000 00:13:35.882 [ 00:13:35.882 { 00:13:35.882 "name": "e742dc47-6958-4f5c-83d5-fa74165a3a47", 00:13:35.882 "aliases": [ 00:13:35.882 "lvs/lvol" 00:13:35.882 ], 00:13:35.882 "product_name": "Logical Volume", 00:13:35.882 "block_size": 4096, 00:13:35.882 "num_blocks": 38912, 00:13:35.882 "uuid": "e742dc47-6958-4f5c-83d5-fa74165a3a47", 00:13:35.882 "assigned_rate_limits": { 00:13:35.882 "rw_ios_per_sec": 0, 00:13:35.882 "rw_mbytes_per_sec": 0, 00:13:35.882 "r_mbytes_per_sec": 0, 00:13:35.882 "w_mbytes_per_sec": 0 00:13:35.882 }, 00:13:35.882 "claimed": false, 00:13:35.882 "zoned": false, 00:13:35.882 "supported_io_types": { 00:13:35.882 "read": true, 00:13:35.882 "write": true, 00:13:35.882 "unmap": true, 00:13:35.882 "write_zeroes": true, 00:13:35.882 "flush": false, 00:13:35.882 "reset": true, 00:13:35.882 "compare": false, 00:13:35.882 "compare_and_write": false, 00:13:35.882 "abort": false, 00:13:35.882 "nvme_admin": false, 00:13:35.882 "nvme_io": false 00:13:35.882 }, 00:13:35.882 "driver_specific": { 00:13:35.882 "lvol": { 00:13:35.882 "lvol_store_uuid": "c7aaaa63-9040-437b-88c0-3fb3adbea119", 00:13:35.882 "base_bdev": "aio_bdev", 00:13:35.882 "thin_provision": false, 00:13:35.882 "num_allocated_clusters": 38, 00:13:35.882 "snapshot": false, 00:13:35.882 "clone": false, 00:13:35.882 "esnap_clone": false 00:13:35.882 } 00:13:35.882 } 00:13:35.882 } 00:13:35.882 ] 00:13:35.882 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:13:35.882 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7aaaa63-9040-437b-88c0-3fb3adbea119 00:13:35.882 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:36.141 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:36.141 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7aaaa63-9040-437b-88c0-3fb3adbea119 00:13:36.141 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:36.141 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:36.141 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:36.400 [2024-05-15 17:04:23.939774] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:36.400 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
c7aaaa63-9040-437b-88c0-3fb3adbea119 00:13:36.400 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:36.400 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7aaaa63-9040-437b-88c0-3fb3adbea119 00:13:36.400 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.400 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.400 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.400 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.400 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.400 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:36.400 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:36.400 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:36.400 17:04:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7aaaa63-9040-437b-88c0-3fb3adbea119 00:13:36.658 request: 00:13:36.658 { 00:13:36.658 "uuid": "c7aaaa63-9040-437b-88c0-3fb3adbea119", 00:13:36.658 "method": "bdev_lvol_get_lvstores", 00:13:36.658 "req_id": 1 00:13:36.658 } 00:13:36.658 Got JSON-RPC error response 00:13:36.658 response: 00:13:36.658 { 00:13:36.658 "code": -19, 00:13:36.658 "message": "No such device" 00:13:36.658 } 00:13:36.658 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:36.658 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:36.658 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:36.658 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:36.658 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:36.917 aio_bdev 00:13:36.917 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e742dc47-6958-4f5c-83d5-fa74165a3a47 00:13:36.917 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=e742dc47-6958-4f5c-83d5-fa74165a3a47 00:13:36.917 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:36.917 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:13:36.917 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
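At this point the behaviour being checked is that the lvstore query correctly fails with -19 once aio_bdev is gone, and that simply re-creating the AIO bdev on the same backing file brings the lvstore and its lvol back. A minimal sketch of that re-attach, with the file path and block size taken from this run and <lvol-uuid> as a placeholder:

  rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # re-open the same backing file
  rpc.py bdev_wait_for_examine                                     # let the lvol module examine and recover it
  rpc.py bdev_get_bdevs -b <lvol-uuid> -t 2000                     # the lvol is visible again once recovered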
00:13:36.917 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:36.917 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:36.917 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e742dc47-6958-4f5c-83d5-fa74165a3a47 -t 2000 00:13:37.175 [ 00:13:37.175 { 00:13:37.175 "name": "e742dc47-6958-4f5c-83d5-fa74165a3a47", 00:13:37.175 "aliases": [ 00:13:37.175 "lvs/lvol" 00:13:37.175 ], 00:13:37.175 "product_name": "Logical Volume", 00:13:37.175 "block_size": 4096, 00:13:37.175 "num_blocks": 38912, 00:13:37.175 "uuid": "e742dc47-6958-4f5c-83d5-fa74165a3a47", 00:13:37.175 "assigned_rate_limits": { 00:13:37.175 "rw_ios_per_sec": 0, 00:13:37.175 "rw_mbytes_per_sec": 0, 00:13:37.175 "r_mbytes_per_sec": 0, 00:13:37.175 "w_mbytes_per_sec": 0 00:13:37.175 }, 00:13:37.175 "claimed": false, 00:13:37.175 "zoned": false, 00:13:37.175 "supported_io_types": { 00:13:37.175 "read": true, 00:13:37.175 "write": true, 00:13:37.175 "unmap": true, 00:13:37.175 "write_zeroes": true, 00:13:37.175 "flush": false, 00:13:37.175 "reset": true, 00:13:37.175 "compare": false, 00:13:37.175 "compare_and_write": false, 00:13:37.175 "abort": false, 00:13:37.175 "nvme_admin": false, 00:13:37.175 "nvme_io": false 00:13:37.175 }, 00:13:37.175 "driver_specific": { 00:13:37.176 "lvol": { 00:13:37.176 "lvol_store_uuid": "c7aaaa63-9040-437b-88c0-3fb3adbea119", 00:13:37.176 "base_bdev": "aio_bdev", 00:13:37.176 "thin_provision": false, 00:13:37.176 "num_allocated_clusters": 38, 00:13:37.176 "snapshot": false, 00:13:37.176 "clone": false, 00:13:37.176 "esnap_clone": false 00:13:37.176 } 00:13:37.176 } 00:13:37.176 } 00:13:37.176 ] 00:13:37.176 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:13:37.176 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7aaaa63-9040-437b-88c0-3fb3adbea119 00:13:37.176 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:37.176 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:37.176 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7aaaa63-9040-437b-88c0-3fb3adbea119 00:13:37.176 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:37.434 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:37.434 17:04:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e742dc47-6958-4f5c-83d5-fa74165a3a47 00:13:37.693 17:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c7aaaa63-9040-437b-88c0-3fb3adbea119 00:13:37.952 17:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:37.952 17:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:37.952 00:13:37.952 real 0m17.652s 00:13:37.952 user 0m44.970s 00:13:37.952 sys 0m4.071s 00:13:37.952 17:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:37.952 17:04:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:37.952 ************************************ 00:13:37.952 END TEST lvs_grow_dirty 00:13:37.952 ************************************ 00:13:37.952 17:04:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:37.952 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:13:37.952 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:13:37.952 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:13:37.952 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:38.211 nvmf_trace.0 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:38.211 rmmod nvme_tcp 00:13:38.211 rmmod nvme_fabrics 00:13:38.211 rmmod nvme_keyring 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3019994 ']' 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3019994 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3019994 ']' 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3019994 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3019994 00:13:38.211 17:04:25 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3019994' 00:13:38.211 killing process with pid 3019994 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3019994 00:13:38.211 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3019994 00:13:38.470 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:38.470 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:38.470 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:38.470 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:38.470 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:38.470 17:04:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.471 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.471 17:04:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.002 17:04:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:41.002 00:13:41.002 real 0m42.253s 00:13:41.002 user 1m6.128s 00:13:41.002 sys 0m9.610s 00:13:41.002 17:04:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:41.002 17:04:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:41.002 ************************************ 00:13:41.002 END TEST nvmf_lvs_grow 00:13:41.002 ************************************ 00:13:41.002 17:04:28 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:41.002 17:04:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:41.002 17:04:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:41.002 17:04:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:41.002 ************************************ 00:13:41.002 START TEST nvmf_bdev_io_wait 00:13:41.002 ************************************ 00:13:41.002 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:41.002 * Looking for test storage... 
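Stepping back over the nvmf_lvs_grow run that just finished: the grow sequence both variants exercise can be summarised in one place. The sketch below strings together the RPCs that actually appear above (paths shortened, sizes and cluster counts as in this run; rpc.py is scripts/rpc.py):

  truncate -s 200M aio_file
  rpc.py bdev_aio_create aio_file aio_bdev 4096
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  rpc.py bdev_lvol_create -u "$lvs" lvol 150                       # 150M lvol inside the store
  truncate -s 400M aio_file                                        # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev                                  # bdev picks up the new block count
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"                          # lvstore claims the new clusters
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before, 99 after in this run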
00:13:41.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.002 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.002 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:41.003 17:04:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:46.268 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:46.268 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:46.268 Found net devices under 0000:86:00.0: cvl_0_0 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:46.268 Found net devices under 0000:86:00.1: cvl_0_1 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:46.268 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:46.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:46.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:13:46.269 00:13:46.269 --- 10.0.0.2 ping statistics --- 00:13:46.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.269 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:46.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:46.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:13:46.269 00:13:46.269 --- 10.0.0.1 ping statistics --- 00:13:46.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:46.269 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3024048 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3024048 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3024048 ']' 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:46.269 17:04:33 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:46.269 [2024-05-15 17:04:33.846589] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
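With the link verified in both directions, nvmfappstart launches the target application inside the target namespace and blocks until its RPC socket answers, which is what the waitforlisten message above reflects; --wait-for-rpc holds off full initialization until the framework_start_init RPC issued a little further down. A rough sketch, with paths relative to the SPDK tree and a simple polling loop standing in for the harness helper:

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# poll the UNIX-domain RPC socket until the app responds (the test uses its own
# waitforlisten helper; rpc_get_methods is just a cheap request to probe with)
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done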
00:13:46.269 [2024-05-15 17:04:33.846629] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.269 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.269 [2024-05-15 17:04:33.904675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:46.527 [2024-05-15 17:04:33.980039] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.527 [2024-05-15 17:04:33.980081] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.527 [2024-05-15 17:04:33.980089] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.527 [2024-05-15 17:04:33.980095] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.527 [2024-05-15 17:04:33.980101] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.527 [2024-05-15 17:04:33.980148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.527 [2024-05-15 17:04:33.980264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:46.527 [2024-05-15 17:04:33.980284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:46.527 [2024-05-15 17:04:33.980286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.091 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:47.091 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:13:47.091 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.091 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:47.091 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:47.091 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.091 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:47.091 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.091 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:47.091 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.091 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:47.091 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.091 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:47.348 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:47.349 [2024-05-15 17:04:34.766612] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.349 17:04:34 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:47.349 Malloc0 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:47.349 [2024-05-15 17:04:34.831936] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:47.349 [2024-05-15 17:04:34.832202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3024292 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3024294 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:47.349 { 00:13:47.349 "params": { 00:13:47.349 "name": "Nvme$subsystem", 00:13:47.349 "trtype": "$TEST_TRANSPORT", 00:13:47.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:47.349 "adrfam": "ipv4", 00:13:47.349 "trsvcid": "$NVMF_PORT", 00:13:47.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:47.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:47.349 "hdgst": ${hdgst:-false}, 00:13:47.349 "ddgst": ${ddgst:-false} 00:13:47.349 }, 00:13:47.349 "method": 
"bdev_nvme_attach_controller" 00:13:47.349 } 00:13:47.349 EOF 00:13:47.349 )") 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3024296 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:47.349 { 00:13:47.349 "params": { 00:13:47.349 "name": "Nvme$subsystem", 00:13:47.349 "trtype": "$TEST_TRANSPORT", 00:13:47.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:47.349 "adrfam": "ipv4", 00:13:47.349 "trsvcid": "$NVMF_PORT", 00:13:47.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:47.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:47.349 "hdgst": ${hdgst:-false}, 00:13:47.349 "ddgst": ${ddgst:-false} 00:13:47.349 }, 00:13:47.349 "method": "bdev_nvme_attach_controller" 00:13:47.349 } 00:13:47.349 EOF 00:13:47.349 )") 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3024299 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:47.349 { 00:13:47.349 "params": { 00:13:47.349 "name": "Nvme$subsystem", 00:13:47.349 "trtype": "$TEST_TRANSPORT", 00:13:47.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:47.349 "adrfam": "ipv4", 00:13:47.349 "trsvcid": "$NVMF_PORT", 00:13:47.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:47.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:47.349 "hdgst": ${hdgst:-false}, 00:13:47.349 "ddgst": ${ddgst:-false} 00:13:47.349 }, 00:13:47.349 "method": "bdev_nvme_attach_controller" 00:13:47.349 } 00:13:47.349 EOF 00:13:47.349 )") 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:47.349 { 00:13:47.349 "params": { 00:13:47.349 "name": "Nvme$subsystem", 00:13:47.349 "trtype": "$TEST_TRANSPORT", 00:13:47.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:47.349 "adrfam": "ipv4", 00:13:47.349 "trsvcid": "$NVMF_PORT", 00:13:47.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:47.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:47.349 "hdgst": ${hdgst:-false}, 00:13:47.349 "ddgst": ${ddgst:-false} 00:13:47.349 }, 00:13:47.349 "method": "bdev_nvme_attach_controller" 00:13:47.349 } 00:13:47.349 EOF 00:13:47.349 )") 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3024292 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:47.349 "params": { 00:13:47.349 "name": "Nvme1", 00:13:47.349 "trtype": "tcp", 00:13:47.349 "traddr": "10.0.0.2", 00:13:47.349 "adrfam": "ipv4", 00:13:47.349 "trsvcid": "4420", 00:13:47.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:47.349 "hdgst": false, 00:13:47.349 "ddgst": false 00:13:47.349 }, 00:13:47.349 "method": "bdev_nvme_attach_controller" 00:13:47.349 }' 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
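Each of the four bdevperf instances (write, read, flush and unmap workloads) receives its bdev layout through --json /dev/fd/63, i.e. a process substitution carrying the JSON that gen_nvmf_target_json assembles and jq pretty-prints above: a single bdev_nvme_attach_controller entry aimed at the listener on 10.0.0.2:4420. A condensed sketch of that pattern for the write instance; note the "subsystems"/"bdev"/"config" envelope is the standard SPDK JSON-config wrapper and is not spelled out verbatim in this trace:

gen_cfg() {
    jq . <<EOF
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
}
# flags mirror the WRITE_PID launch in the trace; the other three instances only
# change the core mask (-m/-i) and the -w workload
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_cfg) -q 128 -o 4096 -w write -t 1 -s 256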
00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:47.349 "params": { 00:13:47.349 "name": "Nvme1", 00:13:47.349 "trtype": "tcp", 00:13:47.349 "traddr": "10.0.0.2", 00:13:47.349 "adrfam": "ipv4", 00:13:47.349 "trsvcid": "4420", 00:13:47.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:47.349 "hdgst": false, 00:13:47.349 "ddgst": false 00:13:47.349 }, 00:13:47.349 "method": "bdev_nvme_attach_controller" 00:13:47.349 }' 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:47.349 "params": { 00:13:47.349 "name": "Nvme1", 00:13:47.349 "trtype": "tcp", 00:13:47.349 "traddr": "10.0.0.2", 00:13:47.349 "adrfam": "ipv4", 00:13:47.349 "trsvcid": "4420", 00:13:47.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:47.349 "hdgst": false, 00:13:47.349 "ddgst": false 00:13:47.349 }, 00:13:47.349 "method": "bdev_nvme_attach_controller" 00:13:47.349 }' 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:47.349 17:04:34 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:47.349 "params": { 00:13:47.349 "name": "Nvme1", 00:13:47.349 "trtype": "tcp", 00:13:47.349 "traddr": "10.0.0.2", 00:13:47.349 "adrfam": "ipv4", 00:13:47.350 "trsvcid": "4420", 00:13:47.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:47.350 "hdgst": false, 00:13:47.350 "ddgst": false 00:13:47.350 }, 00:13:47.350 "method": "bdev_nvme_attach_controller" 00:13:47.350 }' 00:13:47.350 [2024-05-15 17:04:34.875593] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:13:47.350 [2024-05-15 17:04:34.875642] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:47.350 [2024-05-15 17:04:34.884557] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:13:47.350 [2024-05-15 17:04:34.884602] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:47.350 [2024-05-15 17:04:34.884747] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:13:47.350 [2024-05-15 17:04:34.884785] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:47.350 [2024-05-15 17:04:34.885436] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:13:47.350 [2024-05-15 17:04:34.885475] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:47.350 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.350 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.607 [2024-05-15 17:04:35.044988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.607 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.607 [2024-05-15 17:04:35.119820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:47.607 [2024-05-15 17:04:35.136969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.607 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.607 [2024-05-15 17:04:35.197929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.607 [2024-05-15 17:04:35.226559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:47.607 [2024-05-15 17:04:35.251997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.865 [2024-05-15 17:04:35.275545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:47.865 [2024-05-15 17:04:35.329530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:13:47.865 Running I/O for 1 seconds... 00:13:47.865 Running I/O for 1 seconds... 00:13:47.865 Running I/O for 1 seconds... 00:13:47.865 Running I/O for 1 seconds... 00:13:48.797 00:13:48.797 Latency(us) 00:13:48.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.797 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:48.797 Nvme1n1 : 1.01 11569.91 45.19 0.00 0.00 11025.45 6154.69 18236.10 00:13:48.797 =================================================================================================================== 00:13:48.797 Total : 11569.91 45.19 0.00 0.00 11025.45 6154.69 18236.10 00:13:48.797 00:13:48.797 Latency(us) 00:13:48.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.797 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:48.797 Nvme1n1 : 1.00 245707.06 959.79 0.00 0.00 518.94 208.36 680.29 00:13:48.797 =================================================================================================================== 00:13:48.797 Total : 245707.06 959.79 0.00 0.00 518.94 208.36 680.29 00:13:48.797 00:13:48.797 Latency(us) 00:13:48.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.797 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:48.797 Nvme1n1 : 1.01 9882.93 38.61 0.00 0.00 12897.31 5271.37 16754.42 00:13:48.797 =================================================================================================================== 00:13:48.797 Total : 9882.93 38.61 0.00 0.00 12897.31 5271.37 16754.42 00:13:49.054 00:13:49.054 Latency(us) 00:13:49.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.055 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:49.055 Nvme1n1 : 1.00 10710.47 41.84 0.00 0.00 11919.50 4559.03 25644.52 00:13:49.055 =================================================================================================================== 00:13:49.055 Total : 10710.47 41.84 0.00 0.00 11919.50 4559.03 25644.52 00:13:49.055 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 3024294 00:13:49.055 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3024296 00:13:49.055 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3024299 00:13:49.312 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.312 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.312 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:49.312 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.312 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:49.312 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:49.312 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:49.312 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:49.312 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:49.312 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:49.312 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:49.312 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:49.312 rmmod nvme_tcp 00:13:49.312 rmmod nvme_fabrics 00:13:49.312 rmmod nvme_keyring 00:13:49.312 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:49.313 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:49.313 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:49.313 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3024048 ']' 00:13:49.313 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3024048 00:13:49.313 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3024048 ']' 00:13:49.313 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3024048 00:13:49.313 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:13:49.313 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:49.313 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3024048 00:13:49.313 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:49.313 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:49.313 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3024048' 00:13:49.313 killing process with pid 3024048 00:13:49.313 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3024048 00:13:49.313 [2024-05-15 17:04:36.846066] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:49.313 17:04:36 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3024048 00:13:49.571 17:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:49.571 17:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:49.571 17:04:37 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:49.571 17:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:49.571 17:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:49.571 17:04:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.571 17:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.571 17:04:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.472 17:04:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:51.472 00:13:51.472 real 0m10.986s 00:13:51.472 user 0m18.983s 00:13:51.472 sys 0m5.840s 00:13:51.472 17:04:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:51.472 17:04:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:51.472 ************************************ 00:13:51.472 END TEST nvmf_bdev_io_wait 00:13:51.472 ************************************ 00:13:51.730 17:04:39 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:51.730 17:04:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:51.730 17:04:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:51.730 17:04:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:51.730 ************************************ 00:13:51.730 START TEST nvmf_queue_depth 00:13:51.730 ************************************ 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:51.730 * Looking for test storage... 
00:13:51.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:51.730 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:13:51.731 17:04:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:56.990 
17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:56.990 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:56.990 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:56.990 Found net devices under 0000:86:00.0: cvl_0_0 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:56.990 Found net devices under 0000:86:00.1: cvl_0_1 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.990 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:57.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:57.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:13:57.246 00:13:57.246 --- 10.0.0.2 ping statistics --- 00:13:57.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.246 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:57.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:57.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:13:57.246 00:13:57.246 --- 10.0.0.1 ping statistics --- 00:13:57.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.246 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3028069 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3028069 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3028069 ']' 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:57.246 17:04:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:57.246 [2024-05-15 17:04:44.856130] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:13:57.246 [2024-05-15 17:04:44.856186] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.246 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.502 [2024-05-15 17:04:44.912831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.502 [2024-05-15 17:04:44.990491] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.502 [2024-05-15 17:04:44.990528] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.502 [2024-05-15 17:04:44.990534] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.502 [2024-05-15 17:04:44.990541] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.502 [2024-05-15 17:04:44.990546] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.502 [2024-05-15 17:04:44.990562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.063 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:58.063 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:13:58.063 17:04:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:58.063 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:58.063 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:58.063 17:04:45 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.063 17:04:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:58.063 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.063 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:58.063 [2024-05-15 17:04:45.712529] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.063 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.063 17:04:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:58.063 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.063 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:58.320 Malloc0 00:13:58.320 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.320 17:04:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:58.320 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.320 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:58.320 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.320 17:04:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:58.320 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.320 17:04:45 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:58.320 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.320 17:04:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.320 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.320 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:58.321 [2024-05-15 17:04:45.765143] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:58.321 [2024-05-15 17:04:45.765363] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.321 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.321 17:04:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3028316 00:13:58.321 17:04:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:58.321 17:04:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3028316 /var/tmp/bdevperf.sock 00:13:58.321 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3028316 ']' 00:13:58.321 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:58.321 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:58.321 17:04:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:58.321 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:58.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:58.321 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:58.321 17:04:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:58.321 [2024-05-15 17:04:45.812549] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
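The queue_depth target is provisioned with the same few RPCs as the previous test, then bdevperf is started in wait mode (-z) on its own RPC socket, the NVMe-oF controller is attached through that socket, and perform_tests kicks off the 1024-deep verify run shown next. Sketched here with the standalone rpc.py equivalents of the rpc_cmd calls traced above (paths relative to the SPDK tree; the nvmf_tgt started by nvmfappstart is assumed to be on the default /var/tmp/spdk.sock):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# bdevperf idles (-z) until a bdev is attached over its private RPC socket
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests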
00:13:58.321 [2024-05-15 17:04:45.812589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028316 ] 00:13:58.321 EAL: No free 2048 kB hugepages reported on node 1 00:13:58.321 [2024-05-15 17:04:45.864453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.321 [2024-05-15 17:04:45.936825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.284 17:04:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:59.284 17:04:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:13:59.284 17:04:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:59.284 17:04:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.284 17:04:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:59.284 NVMe0n1 00:13:59.284 17:04:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.284 17:04:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:59.284 Running I/O for 10 seconds... 00:14:09.253 00:14:09.253 Latency(us) 00:14:09.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.253 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:09.253 Verification LBA range: start 0x0 length 0x4000 00:14:09.253 NVMe0n1 : 10.05 12144.59 47.44 0.00 0.00 84014.93 10029.86 59723.24 00:14:09.253 =================================================================================================================== 00:14:09.253 Total : 12144.59 47.44 0.00 0.00 84014.93 10029.86 59723.24 00:14:09.253 0 00:14:09.253 17:04:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3028316 00:14:09.253 17:04:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3028316 ']' 00:14:09.253 17:04:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3028316 00:14:09.253 17:04:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:14:09.253 17:04:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:09.253 17:04:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3028316 00:14:09.511 17:04:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:09.511 17:04:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:09.511 17:04:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3028316' 00:14:09.511 killing process with pid 3028316 00:14:09.511 17:04:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3028316 00:14:09.511 Received shutdown signal, test time was about 10.000000 seconds 00:14:09.511 00:14:09.511 Latency(us) 00:14:09.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.512 =================================================================================================================== 00:14:09.512 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:09.512 17:04:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3028316 00:14:09.512 17:04:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:09.512 17:04:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:09.512 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:09.512 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:09.512 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:09.512 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:09.512 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.512 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:09.512 rmmod nvme_tcp 00:14:09.512 rmmod nvme_fabrics 00:14:09.770 rmmod nvme_keyring 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3028069 ']' 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3028069 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3028069 ']' 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3028069 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3028069 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3028069' 00:14:09.770 killing process with pid 3028069 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3028069 00:14:09.770 [2024-05-15 17:04:57.244256] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:09.770 17:04:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3028069 00:14:10.030 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:10.030 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:10.030 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:10.030 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.030 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:10.030 17:04:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.030 17:04:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.030 17:04:57 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.934 17:04:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:11.934 00:14:11.934 real 0m20.339s 00:14:11.934 user 0m24.847s 00:14:11.934 sys 0m5.676s 00:14:11.934 17:04:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:11.934 17:04:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:11.934 ************************************ 00:14:11.934 END TEST nvmf_queue_depth 00:14:11.935 ************************************ 00:14:11.935 17:04:59 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:11.935 17:04:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:11.935 17:04:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:11.935 17:04:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:12.193 ************************************ 00:14:12.193 START TEST nvmf_target_multipath 00:14:12.193 ************************************ 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:12.193 * Looking for test storage... 00:14:12.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:12.193 17:04:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:17.458 17:05:04 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:17.458 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:17.458 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.458 17:05:04 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.458 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:17.459 Found net devices under 0000:86:00.0: cvl_0_0 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:17.459 Found net devices under 0000:86:00.1: cvl_0_1 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.459 17:05:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.459 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:17.459 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:17.459 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:17.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:14:17.459 00:14:17.459 --- 10.0.0.2 ping statistics --- 00:14:17.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.459 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:14:17.459 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:17.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:17.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:14:17.459 00:14:17.459 --- 10.0.0.1 ping statistics --- 00:14:17.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.459 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:14:17.459 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.459 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:17.459 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:17.459 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.459 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:17.459 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:17.459 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.459 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:17.459 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:17.717 only one NIC for nvmf test 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:17.717 rmmod nvme_tcp 00:14:17.717 rmmod nvme_fabrics 00:14:17.717 rmmod nvme_keyring 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.717 17:05:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.618 00:14:19.618 real 0m7.668s 00:14:19.618 user 0m1.566s 00:14:19.618 sys 0m4.085s 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:19.618 17:05:07 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:19.618 ************************************ 00:14:19.618 END TEST nvmf_target_multipath 00:14:19.618 ************************************ 00:14:19.877 17:05:07 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:19.877 17:05:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:19.877 17:05:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:19.877 17:05:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:19.877 ************************************ 00:14:19.877 START TEST nvmf_zcopy 00:14:19.877 ************************************ 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:19.877 * Looking for test storage... 
00:14:19.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:19.877 17:05:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:25.146 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:25.147 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.147 
17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:25.147 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:25.147 Found net devices under 0000:86:00.0: cvl_0_0 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:25.147 Found net devices under 0000:86:00.1: cvl_0_1 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:25.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:14:25.147 00:14:25.147 --- 10.0.0.2 ping statistics --- 00:14:25.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.147 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:25.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:14:25.147 00:14:25.147 --- 10.0.0.1 ping statistics --- 00:14:25.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.147 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3036964 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3036964 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3036964 ']' 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:25.147 17:05:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:25.407 [2024-05-15 17:05:12.818370] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:14:25.407 [2024-05-15 17:05:12.818417] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.407 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.407 [2024-05-15 17:05:12.876890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.407 [2024-05-15 17:05:12.956742] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.407 [2024-05-15 17:05:12.956776] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:25.407 [2024-05-15 17:05:12.956783] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.407 [2024-05-15 17:05:12.956789] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.407 [2024-05-15 17:05:12.956795] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.407 [2024-05-15 17:05:12.956813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.971 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:25.971 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:14:25.971 17:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:25.971 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:25.971 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:26.230 [2024-05-15 17:05:13.664269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:26.230 [2024-05-15 17:05:13.684268] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:26.230 [2024-05-15 17:05:13.684449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:26.230 malloc0 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.230 { 00:14:26.230 "params": { 00:14:26.230 "name": "Nvme$subsystem", 00:14:26.230 "trtype": "$TEST_TRANSPORT", 00:14:26.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.230 "adrfam": "ipv4", 00:14:26.230 "trsvcid": "$NVMF_PORT", 00:14:26.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.230 "hdgst": ${hdgst:-false}, 00:14:26.230 "ddgst": ${ddgst:-false} 00:14:26.230 }, 00:14:26.230 "method": "bdev_nvme_attach_controller" 00:14:26.230 } 00:14:26.230 EOF 00:14:26.230 )") 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:26.230 17:05:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.230 "params": { 00:14:26.230 "name": "Nvme1", 00:14:26.230 "trtype": "tcp", 00:14:26.230 "traddr": "10.0.0.2", 00:14:26.230 "adrfam": "ipv4", 00:14:26.230 "trsvcid": "4420", 00:14:26.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.230 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.230 "hdgst": false, 00:14:26.230 "ddgst": false 00:14:26.230 }, 00:14:26.230 "method": "bdev_nvme_attach_controller" 00:14:26.230 }' 00:14:26.230 [2024-05-15 17:05:13.763504] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:14:26.230 [2024-05-15 17:05:13.763544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3037213 ] 00:14:26.230 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.230 [2024-05-15 17:05:13.817214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.489 [2024-05-15 17:05:13.890046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.747 Running I/O for 10 seconds... 
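The rpc_cmd calls traced above amount to a small target-side setup that can be replayed by hand against the nvmf_tgt started earlier (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2). A minimal bash sketch follows; it assumes the default RPC socket /var/tmp/spdk.sock and an in-tree scripts/rpc.py from the same SPDK checkout — those two paths are assumptions, since the trace only shows the rpc_cmd wrapper. All flags are taken verbatim from the trace.

    # TCP transport with zero-copy enabled and no in-capsule data (-o -c 0 --zcopy)
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem cnode1: allow any host (-a), serial SPDK00000000000001, up to 10 namespaces (-m 10)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # listen on the target-namespace address used in this run
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # initiator side, as invoked in the trace:
    #   bdevperf --json <config> -t 10 -q 128 -w verify -o 8192
    # where <config> is the bdev_nvme_attach_controller entry printed by gen_nvmf_target_json above
    # (the test feeds it through /dev/fd/62 rather than a file)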
00:14:36.784
00:14:36.784 Latency(us)
00:14:36.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:36.784 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:14:36.784 Verification LBA range: start 0x0 length 0x1000
00:14:36.784 Nvme1n1 : 10.01 8689.48 67.89 0.00 0.00 14687.87 2407.74 25758.50
00:14:36.784 ===================================================================================================================
00:14:36.784 Total : 8689.48 67.89 0.00 0.00 14687.87 2407.74 25758.50
00:14:37.045 17:05:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3039031
00:14:37.045 17:05:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:14:37.045 17:05:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:37.045 17:05:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:14:37.045 17:05:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:14:37.045 17:05:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:14:37.045 17:05:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:14:37.045 17:05:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:37.045 17:05:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:37.045 {
00:14:37.045 "params": {
00:14:37.045 "name": "Nvme$subsystem",
00:14:37.045 "trtype": "$TEST_TRANSPORT",
00:14:37.045 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:37.045 "adrfam": "ipv4",
00:14:37.045 "trsvcid": "$NVMF_PORT",
00:14:37.045 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:37.045 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:37.045 "hdgst": ${hdgst:-false},
00:14:37.045 "ddgst": ${ddgst:-false}
00:14:37.045 },
00:14:37.045 "method": "bdev_nvme_attach_controller"
00:14:37.045 }
00:14:37.045 EOF
00:14:37.045 )")
00:14:37.045 17:05:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:14:37.045 [2024-05-15 17:05:24.456178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:37.045 [2024-05-15 17:05:24.456210] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:37.045 17:05:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:14:37.045 17:05:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:37.045 17:05:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:37.045 "params": { 00:14:37.045 "name": "Nvme1", 00:14:37.045 "trtype": "tcp", 00:14:37.045 "traddr": "10.0.0.2", 00:14:37.045 "adrfam": "ipv4", 00:14:37.045 "trsvcid": "4420", 00:14:37.045 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:37.045 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:37.045 "hdgst": false, 00:14:37.045 "ddgst": false 00:14:37.045 }, 00:14:37.045 "method": "bdev_nvme_attach_controller" 00:14:37.045 }' 00:14:37.045 [2024-05-15 17:05:24.468173] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.468186] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.480203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.480214] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.492234] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.492243] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.493153] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:14:37.045 [2024-05-15 17:05:24.493201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039031 ] 00:14:37.045 [2024-05-15 17:05:24.504267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.504278] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.045 [2024-05-15 17:05:24.516300] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.516310] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.528333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.528344] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.540363] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.540373] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.545975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.045 [2024-05-15 17:05:24.552401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.552412] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.564442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.564453] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.576473] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.576484] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.588504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.588523] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.600530] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.600541] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.612573] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.612583] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.621508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.045 [2024-05-15 17:05:24.624595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.624605] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.636640] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.636658] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.648670] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.648683] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.660692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.660704] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.672723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.672733] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.684761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.684772] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.045 [2024-05-15 17:05:24.696786] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.045 [2024-05-15 17:05:24.696796] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.708837] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.708859] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.720856] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.720871] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.732897] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.732911] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.744921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.744931] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 
[2024-05-15 17:05:24.756953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.756964] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.768989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.769000] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.781024] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.781038] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.793064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.793079] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.805093] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.805103] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.817131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.817149] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 Running I/O for 5 seconds... 00:14:37.305 [2024-05-15 17:05:24.833803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.833823] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.849472] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.849492] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.863789] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.863808] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.874692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.874711] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.884013] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.884032] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.893157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.893180] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.908362] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.908384] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.919092] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.919112] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.933851] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:14:37.305 [2024-05-15 17:05:24.933869] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.949126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.949150] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.305 [2024-05-15 17:05:24.963481] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.305 [2024-05-15 17:05:24.963500] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:24.974142] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:24.974161] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:24.983446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:24.983464] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:24.992591] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:24.992610] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.001991] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.002010] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.016270] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.016289] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.030106] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.030126] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.043985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.044004] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.057748] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.057767] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.071420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.071439] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.080219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.080238] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.094547] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.094567] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.108174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.108192] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 
17:05:25.122759] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.122778] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.133896] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.133915] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.148176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.148196] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.161278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.161297] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.170282] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.170301] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.184780] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.184804] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.198470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.198489] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.207443] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.207461] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.565 [2024-05-15 17:05:25.221560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.565 [2024-05-15 17:05:25.221579] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.229209] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.229227] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.238902] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.238921] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.247901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.247920] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.257114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.257133] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.266373] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.266391] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.275590] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.275609] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.284066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.284084] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.292625] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.292643] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.301260] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.301279] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.310394] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.310412] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.319013] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.319031] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.327677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.327694] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.336969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.336986] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.345493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.345511] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.354660] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.354678] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.363780] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.363803] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.373219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.373237] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.381766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.381784] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.391120] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.391138] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.397958] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.397976] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.408352] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.408371] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.416820] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.416839] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.425327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.425345] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.434648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.434666] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.443320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.443339] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.452560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.452578] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.461884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.461902] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.471145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.471170] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.825 [2024-05-15 17:05:25.480189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.825 [2024-05-15 17:05:25.480208] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.489073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.489093] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.498262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.498282] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.507433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.507452] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.516154] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.516179] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.525288] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.525307] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.533829] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.533847] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.543201] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.543220] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.552736] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.552755] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.562557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.562576] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.571460] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.571478] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.580546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.580564] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.589789] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.589808] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.598397] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.598414] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.607450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.607468] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.616534] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.616552] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.625786] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.625805] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.634391] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.634410] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.643141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.643160] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.650032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.650051] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.660935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.660954] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.669603] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.669620] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.678826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.678845] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.687463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.687482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.696797] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.696816] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.705940] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.705959] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.714463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.714482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.723772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.723791] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.732846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.732864] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.085 [2024-05-15 17:05:25.741490] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.085 [2024-05-15 17:05:25.741509] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.750595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.750613] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.759333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.759351] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.768567] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.768585] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.777091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.777110] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.785650] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.785668] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.794965] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.794984] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.803693] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.803711] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.812725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.812743] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.821710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.821729] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.831005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.831024] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.840262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.840280] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.848857] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.848876] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.857476] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.857496] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.867013] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.867031] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.876306] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.876323] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.885545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.885563] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.894159] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.894182] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.345 [2024-05-15 17:05:25.901227] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.345 [2024-05-15 17:05:25.901245] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.346 [2024-05-15 17:05:25.911869] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.346 [2024-05-15 17:05:25.911888] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.346 [2024-05-15 17:05:25.920509] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.346 [2024-05-15 17:05:25.920528] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.346 [2024-05-15 17:05:25.929741] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.346 [2024-05-15 17:05:25.929761] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.346 [2024-05-15 17:05:25.938488] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.346 [2024-05-15 17:05:25.938506] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.346 [2024-05-15 17:05:25.947064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.346 [2024-05-15 17:05:25.947082] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.346 [2024-05-15 17:05:25.956450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.346 [2024-05-15 17:05:25.956468] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.346 [2024-05-15 17:05:25.965724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.346 [2024-05-15 17:05:25.965742] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.346 [2024-05-15 17:05:25.975108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.346 [2024-05-15 17:05:25.975126] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.346 [2024-05-15 17:05:25.984358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.346 [2024-05-15 17:05:25.984376] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.346 [2024-05-15 17:05:25.993170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.346 [2024-05-15 17:05:25.993190] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.346 [2024-05-15 17:05:26.000203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.346 [2024-05-15 17:05:26.000222] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.605 [2024-05-15 17:05:26.011358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.605 [2024-05-15 17:05:26.011378] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.605 [2024-05-15 17:05:26.020209] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.605 [2024-05-15 17:05:26.020228] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.605 [2024-05-15 17:05:26.029494] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.605 [2024-05-15 17:05:26.029512] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.605 [2024-05-15 17:05:26.038241] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.605 [2024-05-15 17:05:26.038263] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.605 [2024-05-15 17:05:26.047462] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.605 [2024-05-15 17:05:26.047480] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.605 [2024-05-15 17:05:26.056614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.605 [2024-05-15 17:05:26.056632] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.605 [2024-05-15 17:05:26.063560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.605 [2024-05-15 17:05:26.063577] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.605 [2024-05-15 17:05:26.073979] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.073997] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.082502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.082520] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.091176] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.091194] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.099805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.099823] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.106686] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.106703] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.117008] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.117026] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.125515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.125534] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.134615] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.134633] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.143677] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.143696] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.152812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.152830] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.159667] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.159684] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.170968] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.170986] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.179908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.179927] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.189328] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.189346] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.197823] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.197841] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.206976] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.206998] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.216142] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.216161] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.224659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.224677] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.233137] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.233154] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.242626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.242645] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.251351] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.251369] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.606 [2024-05-15 17:05:26.259953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.606 [2024-05-15 17:05:26.259971] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.866 [2024-05-15 17:05:26.269084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.866 [2024-05-15 17:05:26.269103] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.866 [2024-05-15 17:05:26.277749] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.866 [2024-05-15 17:05:26.277768] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.866 [2024-05-15 17:05:26.287007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.866 [2024-05-15 17:05:26.287025] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.866 [2024-05-15 17:05:26.295591] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.866 [2024-05-15 17:05:26.295610] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.866 [2024-05-15 17:05:26.304812] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.866 [2024-05-15 17:05:26.304831] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.866 [2024-05-15 17:05:26.313713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.866 [2024-05-15 17:05:26.313731] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.866 [2024-05-15 17:05:26.322877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.866 [2024-05-15 17:05:26.322896] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.866 [2024-05-15 17:05:26.332138] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.866 [2024-05-15 17:05:26.332158] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.866 [2024-05-15 17:05:26.340699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.866 [2024-05-15 17:05:26.340718] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.866 [2024-05-15 17:05:26.349097] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.866 [2024-05-15 17:05:26.349115] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.866 [2024-05-15 17:05:26.358475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.866 [2024-05-15 17:05:26.358494] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.867 [2024-05-15 17:05:26.367318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.867 [2024-05-15 17:05:26.367336] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.867 [2024-05-15 17:05:26.375848] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.867 [2024-05-15 17:05:26.375871] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.867 [2024-05-15 17:05:26.385007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.867 [2024-05-15 17:05:26.385025] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.867 [2024-05-15 17:05:26.393626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.867 [2024-05-15 17:05:26.393644] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.867 [2024-05-15 17:05:26.402928] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.867 [2024-05-15 17:05:26.402947] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.867 [2024-05-15 17:05:26.412275] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.867 [2024-05-15 17:05:26.412293] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.867 [2024-05-15 17:05:26.420831] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.867 [2024-05-15 17:05:26.420849] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.867 [2024-05-15 17:05:26.429847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.867 [2024-05-15 17:05:26.429865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.867 [2024-05-15 17:05:26.438871] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.867 [2024-05-15 17:05:26.438888] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.867 [2024-05-15 17:05:26.448079] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.867 [2024-05-15 17:05:26.448097] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of error lines (subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use", then nvmf_rpc.c:1536:nvmf_rpc_ns_paused: "Unable to add namespace") repeats for every further attempt to add the namespace, with only the timestamps changing, from [2024-05-15 17:05:26.457349] through [2024-05-15 17:05:29.171185] (log clock 00:14:38.867 to 00:14:41.725, several hundred occurrences); the duplicate entries are elided here ...]
00:14:41.725 [2024-05-15 17:05:29.179862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.179880]
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.186674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.186692] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.197853] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.197871] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.206423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.206441] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.214952] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.214970] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.224084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.224102] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.232642] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.232660] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.241939] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.241957] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.248761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.248779] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.259119] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.259143] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.268052] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.268071] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.276450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.276468] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.285492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.285510] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.294725] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.294744] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.302003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.302021] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.312609] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.312628] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.321388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.321406] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.330616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.330635] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.339790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.339808] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.349069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.349086] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.357667] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.357685] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.366442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.366460] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.725 [2024-05-15 17:05:29.373313] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.725 [2024-05-15 17:05:29.373330] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.984 [2024-05-15 17:05:29.384374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.384393] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.393003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.393021] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.402098] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.402115] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.410796] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.410814] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.419996] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.420014] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.429446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.429468] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.437800] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.437819] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.446491] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.446509] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.455734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.455753] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.464930] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.464948] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.474433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.474451] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.483206] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.483225] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.492896] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.492914] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.502099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.502117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.510827] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.510846] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.519558] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.519576] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.528263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.528282] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.537051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.537070] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.546083] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.546101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.554751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.554771] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.563431] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.563450] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.570311] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.570328] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.581593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.581612] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.590597] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.590615] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.597626] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.597644] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.608908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.608926] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.617719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.617737] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.627087] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.627106] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:41.985 [2024-05-15 17:05:29.635675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:41.985 [2024-05-15 17:05:29.635694] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.644187] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.644206] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.653658] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.653677] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.662555] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.662575] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.669438] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.669456] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.680457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.680476] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.689301] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.689321] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.698387] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.698406] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.707561] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.707581] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.717070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.717089] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.725772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.725790] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.735064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.735083] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.741880] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.741898] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.752344] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.752363] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.760868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.760886] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.770355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.770374] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.779793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.779812] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.786932] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.786950] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.797147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.797172] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.806067] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.806086] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.814865] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.814884] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.824338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.824357] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.833282] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.833301] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.839343] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.839360] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 00:14:42.245 Latency(us) 00:14:42.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.245 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:42.245 Nvme1n1 : 5.01 16617.91 129.83 0.00 0.00 7695.12 3305.29 19945.74 00:14:42.245 =================================================================================================================== 00:14:42.245 Total : 16617.91 129.83 0.00 0.00 7695.12 3305.29 19945.74 00:14:42.245 [2024-05-15 17:05:29.847361] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.847376] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.855379] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.855392] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.863406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.863417] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.871446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.871464] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.879452] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.879465] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.887474] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.887485] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.895501] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.895515] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.245 [2024-05-15 17:05:29.903515] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.245 [2024-05-15 17:05:29.903527] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.504 [2024-05-15 17:05:29.911536] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.504 [2024-05-15 17:05:29.911549] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.504 [2024-05-15 17:05:29.919555] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.504 [2024-05-15 17:05:29.919566] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.504 [2024-05-15 17:05:29.927577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.504 [2024-05-15 17:05:29.927589] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.504 [... the same two errors keep repeating, with only the timestamps changing, from 17:05:29.935 through 17:05:30.039 ...] 00:14:42.505 [2024-05-15 17:05:30.047896]
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:42.505 [2024-05-15 17:05:30.047911] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:42.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3039031) - No such process 00:14:42.505 17:05:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3039031 00:14:42.505 17:05:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.505 17:05:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.505 17:05:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:42.505 17:05:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.505 17:05:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:42.505 17:05:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.505 17:05:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:42.505 delay0 00:14:42.505 17:05:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.505 17:05:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:42.505 17:05:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.505 17:05:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:42.505 17:05:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.505 17:05:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:42.505 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.763 [2024-05-15 17:05:30.202291] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:49.328 Initializing NVMe Controllers 00:14:49.329 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:49.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:49.329 Initialization complete. Launching workers. 
00:14:49.329 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 775 00:14:49.329 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1062, failed to submit 33 00:14:49.329 success 875, unsuccess 187, failed 0 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:49.329 rmmod nvme_tcp 00:14:49.329 rmmod nvme_fabrics 00:14:49.329 rmmod nvme_keyring 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3036964 ']' 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3036964 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3036964 ']' 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3036964 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3036964 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3036964' 00:14:49.329 killing process with pid 3036964 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3036964 00:14:49.329 [2024-05-15 17:05:36.608759] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3036964 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.329 17:05:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.234 
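With the randrw verify job still running, zcopy.sh swaps NSID 1 of nqn.2016-06.io.spdk:cnode1 over to a delay bdev, presumably so that commands stay queued long enough for the abort example to cancel them. A rough by-hand equivalent of that step, using SPDK's scripts/rpc.py client against the same target (paths are relative to the SPDK repo root, the default RPC socket is assumed, and the latency values are simply the ones the script used above):

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # wrap malloc0 in a delay bdev with artificial read/write latencies (microseconds)
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # re-expose the delayed bdev as NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # queue I/O over NVMe/TCP and abort it
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The long run of 'Requested NSID 1 already in use' / 'Unable to add namespace' messages earlier in the output is the script repeatedly re-issuing nvmf_subsystem_add_ns for an NSID that already exists; it does not indicate a failure here, since the run goes on to report the abort statistics and the test finishes below.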
17:05:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:51.234 00:14:51.234 real 0m31.543s 00:14:51.234 user 0m43.322s 00:14:51.234 sys 0m10.579s 00:14:51.234 17:05:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:51.234 17:05:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:51.234 ************************************ 00:14:51.234 END TEST nvmf_zcopy 00:14:51.234 ************************************ 00:14:51.493 17:05:38 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:51.493 17:05:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:51.493 17:05:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:51.493 17:05:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:51.493 ************************************ 00:14:51.493 START TEST nvmf_nmic 00:14:51.493 ************************************ 00:14:51.493 17:05:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:51.493 * Looking for test storage... 00:14:51.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.493 17:05:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # 
nvmftestinit 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:14:51.494 17:05:39 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.759 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:56.760 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:56.760 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:56.760 Found net devices under 0000:86:00.0: cvl_0_0 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.760 17:05:44 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:56.760 Found net devices under 0000:86:00.1: cvl_0_1 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:56.760 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:57.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:57.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:14:57.019 00:14:57.019 --- 10.0.0.2 ping statistics --- 00:14:57.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.019 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:57.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:57.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:14:57.019 00:14:57.019 --- 10.0.0.1 ping statistics --- 00:14:57.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.019 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3044393 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3044393 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3044393 ']' 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:57.019 17:05:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:57.019 [2024-05-15 17:05:44.583584] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
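For reference, the nvmf_tcp_init plumbing traced above moves the target-side e810 port into its own network namespace so the initiator (host side) and the target each keep their own port and network stack. Condensed, with the interface names reported by the probe (cvl_0_0 as the target port, cvl_0_1 as the initiator port):

  ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # accept TCP 4420 on the initiator-side port
  ping -c 1 10.0.0.2                                                  # host to namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace to host

nvmf_tgt is then started under ip netns exec cvl_0_0_ns_spdk, which keeps its 10.0.0.2:4420 listener behind the port that was moved into the namespace.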
00:14:57.019 [2024-05-15 17:05:44.583634] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.019 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.019 [2024-05-15 17:05:44.639773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.277 [2024-05-15 17:05:44.721457] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.277 [2024-05-15 17:05:44.721492] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.277 [2024-05-15 17:05:44.721499] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.277 [2024-05-15 17:05:44.721505] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.277 [2024-05-15 17:05:44.721510] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.277 [2024-05-15 17:05:44.721552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.277 [2024-05-15 17:05:44.721638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.277 [2024-05-15 17:05:44.721725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.277 [2024-05-15 17:05:44.721726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:57.841 [2024-05-15 17:05:45.450108] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:57.841 Malloc0 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.841 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:57.842 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.842 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:57.842 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.842 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:57.842 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.842 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:57.842 [2024-05-15 17:05:45.493432] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:57.842 [2024-05-15 17:05:45.493681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.842 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.842 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:57.842 test case1: single bdev can't be used in multiple subsystems 00:14:57.842 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:57.842 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.842 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:58.098 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.098 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:58.098 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.098 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:58.098 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.098 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:58.098 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:58.098 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.098 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:58.098 [2024-05-15 17:05:45.517546] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:58.098 [2024-05-15 17:05:45.517564] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:58.098 [2024-05-15 17:05:45.517571] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:58.098 request: 00:14:58.098 { 00:14:58.098 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:58.098 "namespace": { 00:14:58.098 "bdev_name": "Malloc0", 00:14:58.098 "no_auto_visible": false 00:14:58.098 }, 00:14:58.098 "method": "nvmf_subsystem_add_ns", 00:14:58.098 "req_id": 1 00:14:58.098 } 00:14:58.098 Got JSON-RPC error response 00:14:58.098 response: 00:14:58.098 { 00:14:58.098 "code": -32602, 00:14:58.098 "message": "Invalid parameters" 00:14:58.098 } 00:14:58.098 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:58.099 17:05:45 
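The JSON-RPC error above is the expected outcome of test case1: Malloc0 is already claimed (type exclusive_write) by the namespace attached to cnode1, so adding the same bdev to a second subsystem is rejected. A hedged sketch of the equivalent rpc.py sequence, assuming the target is already running with the TCP transport created as earlier in the log (names, NQNs and the rpc.py path are as they appear in the log; error handling is simplified):

  #!/usr/bin/env bash
  # Sketch: one bdev cannot back namespaces in two subsystems; the last call is expected to fail.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  # Malloc0 is already claimed by cnode1, so this returns "Invalid parameters".
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
      || echo 'Adding namespace failed - expected result.'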
nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:58.099 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:58.099 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:58.099 Adding namespace failed - expected result. 00:14:58.099 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:58.099 test case2: host connect to nvmf target in multiple paths 00:14:58.099 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:58.099 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.099 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:58.099 [2024-05-15 17:05:45.529674] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:58.099 17:05:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.099 17:05:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:59.028 17:05:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:00.427 17:05:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:00.427 17:05:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:15:00.427 17:05:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.427 17:05:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:00.427 17:05:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:15:02.320 17:05:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:02.320 17:05:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:02.320 17:05:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.320 17:05:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:02.320 17:05:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.320 17:05:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:15:02.320 17:05:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:02.320 [global] 00:15:02.320 thread=1 00:15:02.320 invalidate=1 00:15:02.320 rw=write 00:15:02.320 time_based=1 00:15:02.320 runtime=1 00:15:02.320 ioengine=libaio 00:15:02.320 direct=1 00:15:02.320 bs=4096 00:15:02.320 iodepth=1 00:15:02.320 norandommap=0 00:15:02.320 numjobs=1 00:15:02.320 00:15:02.320 verify_dump=1 00:15:02.320 verify_backlog=512 00:15:02.320 verify_state_save=0 00:15:02.320 do_verify=1 00:15:02.320 verify=crc32c-intel 00:15:02.320 [job0] 00:15:02.320 filename=/dev/nvme0n1 00:15:02.320 Could not set queue depth (nvme0n1) 00:15:02.576 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:15:02.576 fio-3.35 00:15:02.576 Starting 1 thread 00:15:03.944 00:15:03.944 job0: (groupid=0, jobs=1): err= 0: pid=3045471: Wed May 15 17:05:51 2024 00:15:03.944 read: IOPS=1450, BW=5803KiB/s (5943kB/s)(5960KiB/1027msec) 00:15:03.944 slat (nsec): min=7104, max=40762, avg=8115.79, stdev=1472.29 00:15:03.944 clat (usec): min=259, max=41083, avg=458.03, stdev=2353.51 00:15:03.944 lat (usec): min=267, max=41108, avg=466.15, stdev=2354.19 00:15:03.944 clat percentiles (usec): 00:15:03.944 | 1.00th=[ 265], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 277], 00:15:03.944 | 30.00th=[ 285], 40.00th=[ 306], 50.00th=[ 310], 60.00th=[ 314], 00:15:03.944 | 70.00th=[ 318], 80.00th=[ 347], 90.00th=[ 424], 95.00th=[ 449], 00:15:03.944 | 99.00th=[ 461], 99.50th=[ 478], 99.90th=[41157], 99.95th=[41157], 00:15:03.944 | 99.99th=[41157] 00:15:03.944 write: IOPS=1495, BW=5982KiB/s (6126kB/s)(6144KiB/1027msec); 0 zone resets 00:15:03.944 slat (usec): min=10, max=23323, avg=27.13, stdev=594.81 00:15:03.944 clat (usec): min=144, max=371, avg=182.89, stdev=14.07 00:15:03.944 lat (usec): min=156, max=23563, avg=210.02, stdev=596.44 00:15:03.944 clat percentiles (usec): 00:15:03.944 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 178], 00:15:03.944 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 184], 60.00th=[ 186], 00:15:03.944 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 200], 00:15:03.944 | 99.00th=[ 210], 99.50th=[ 215], 99.90th=[ 262], 99.95th=[ 371], 00:15:03.944 | 99.99th=[ 371] 00:15:03.944 bw ( KiB/s): min= 4096, max= 8192, per=100.00%, avg=6144.00, stdev=2896.31, samples=2 00:15:03.944 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:15:03.944 lat (usec) : 250=50.66%, 500=49.17% 00:15:03.944 lat (msec) : 50=0.17% 00:15:03.944 cpu : usr=3.41%, sys=3.89%, ctx=3028, majf=0, minf=2 00:15:03.944 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.944 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.944 issued rwts: total=1490,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.944 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.944 00:15:03.944 Run status group 0 (all jobs): 00:15:03.944 READ: bw=5803KiB/s (5943kB/s), 5803KiB/s-5803KiB/s (5943kB/s-5943kB/s), io=5960KiB (6103kB), run=1027-1027msec 00:15:03.944 WRITE: bw=5982KiB/s (6126kB/s), 5982KiB/s-5982KiB/s (6126kB/s-6126kB/s), io=6144KiB (6291kB), run=1027-1027msec 00:15:03.944 00:15:03.944 Disk stats (read/write): 00:15:03.944 nvme0n1: ios=1511/1536, merge=0/0, ticks=1494/266, in_queue=1760, util=98.50% 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic 
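For reference, the fio-wrapper invocation above (-p nvmf -i 4096 -d 1 -t write -r 1 -v) expands to essentially the job file that fio echoed back before the run. A standalone equivalent, reconstructed from that dump; /dev/nvme0n1 is simply whichever device node the nvme connect calls exposed on this host:

  #!/usr/bin/env bash
  # Sketch: the 1-second write+verify job run against the connected namespace.
  cat > nmic-write.fio <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1
  EOF
  fio nmic-write.fio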
-- common/autotest_common.sh@1227 -- # return 0 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:03.944 rmmod nvme_tcp 00:15:03.944 rmmod nvme_fabrics 00:15:03.944 rmmod nvme_keyring 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3044393 ']' 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3044393 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3044393 ']' 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3044393 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3044393 00:15:03.944 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:03.945 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:03.945 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3044393' 00:15:03.945 killing process with pid 3044393 00:15:03.945 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3044393 00:15:03.945 [2024-05-15 17:05:51.584584] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:03.945 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3044393 00:15:04.203 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:04.204 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:04.204 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:04.204 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:04.204 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:04.204 17:05:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.204 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.204 17:05:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.730 17:05:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:06.730 00:15:06.730 real 0m14.938s 00:15:06.730 user 0m34.904s 00:15:06.730 sys 0m4.924s 00:15:06.730 17:05:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # 
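The cleanup that closed the nvmf_nmic run above (nvmftestfini) is the mirror image of the setup: disconnect the host, unload the NVMe modules, stop the target and tear the namespace back down. A condensed sketch using the names from the log; the namespace-deletion step is an assumption about what remove_spdk_ns does, since its body is not shown here:

  #!/usr/bin/env bash
  # Sketch: teardown mirroring nvmftestfini; pass the nvmf_tgt PID (3044393 in this run) as $1.
  nvmfpid=$1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both the 4420 and 4421 paths
  sync
  modprobe -v -r nvme-tcp                         # the log shows nvme_tcp/nvme_fabrics/nvme_keyring unloading
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"
  # remove_spdk_ns presumably deletes the namespace, handing cvl_0_0 back to the default namespace.
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1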
xtrace_disable 00:15:06.730 17:05:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:06.730 ************************************ 00:15:06.730 END TEST nvmf_nmic 00:15:06.730 ************************************ 00:15:06.730 17:05:53 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:06.730 17:05:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:06.730 17:05:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:06.730 17:05:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:06.730 ************************************ 00:15:06.730 START TEST nvmf_fio_target 00:15:06.730 ************************************ 00:15:06.730 17:05:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:06.730 * Looking for test storage... 00:15:06.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.730 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:06.731 17:05:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.989 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:11.989 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:11.989 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:11.989 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:11.989 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:11.989 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:11.989 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:11.989 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:11.989 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.990 17:05:59 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:11.990 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:11.990 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.990 17:05:59 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:11.990 Found net devices under 0000:86:00.0: cvl_0_0 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:11.990 Found net devices under 0000:86:00.1: cvl_0_1 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:11.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:15:11.990 00:15:11.990 --- 10.0.0.2 ping statistics --- 00:15:11.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.990 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:11.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:15:11.990 00:15:11.990 --- 10.0.0.1 ping statistics --- 00:15:11.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.990 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3049219 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3049219 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3049219 ']' 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
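Before the namespace is rebuilt for the fio_target run, gather_supported_nvmf_pci_devs re-discovers the NICs by matching known PCI IDs (here the Intel E810, 0x8086:0x159b) and then listing the net devices sysfs exposes under each matching function, exactly as the "Found net devices under 0000:86:00.x" lines above show. A small standalone version of that sysfs walk, with the vendor/device IDs taken from the log and no driver-binding or link-state checks:

  #!/usr/bin/env bash
  # Sketch: list net devices backed by Intel E810 (8086:159b) PCI functions via sysfs.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")   # e.g. 0x8086
      device=$(cat "$pci/device")   # e.g. 0x159b
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] || continue
          echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done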
00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:11.990 17:05:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.990 [2024-05-15 17:05:59.543272] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:15:11.990 [2024-05-15 17:05:59.543319] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.990 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.990 [2024-05-15 17:05:59.599059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.248 [2024-05-15 17:05:59.680600] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.248 [2024-05-15 17:05:59.680634] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.248 [2024-05-15 17:05:59.680641] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.248 [2024-05-15 17:05:59.680648] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.248 [2024-05-15 17:05:59.680653] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.248 [2024-05-15 17:05:59.680693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.248 [2024-05-15 17:05:59.680786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.248 [2024-05-15 17:05:59.680849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.248 [2024-05-15 17:05:59.680850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.812 17:06:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:12.812 17:06:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:15:12.812 17:06:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:12.812 17:06:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:12.812 17:06:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.812 17:06:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.812 17:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:13.069 [2024-05-15 17:06:00.554572] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.069 17:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:13.325 17:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:13.325 17:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:13.581 17:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:13.581 17:06:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:13.581 17:06:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:13.581 17:06:01 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:13.839 17:06:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:13.839 17:06:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:14.097 17:06:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:14.355 17:06:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:14.355 17:06:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:14.355 17:06:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:14.355 17:06:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:14.612 17:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:14.612 17:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:14.869 17:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:15.127 17:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:15.127 17:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:15.127 17:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:15.127 17:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:15.385 17:06:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.642 [2024-05-15 17:06:03.068766] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:15.642 [2024-05-15 17:06:03.069015] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.642 17:06:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:15.642 17:06:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:15.899 17:06:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:17.272 17:06:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
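The target-side storage for the fio run above is built entirely from RPCs: two plain malloc bdevs, a RAID0 over two more, and a concat bdev over three, all attached as namespaces of cnode1 before the listener comes up and the host connects. Roughly, with the same sizes and names the log uses (malloc bdevs are auto-named Malloc0..Malloc6 here on the assumption that none exist yet):

  #!/usr/bin/env bash
  # Sketch: bdev/subsystem layout used by target/fio.sh (sizes and names from the log).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192

  for _ in {1..7}; do
      $rpc bdev_malloc_create 64 512            # 64 MiB, 512-byte blocks, auto-named
  done
  $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The four namespaces are what the subsequent "waitforserial SPDKISFASTANDAWESOME 4" is counting before the four-job fio workload starts.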
-- # waitforserial SPDKISFASTANDAWESOME 4 00:15:17.272 17:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:15:17.272 17:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:17.272 17:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:15:17.272 17:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:15:17.272 17:06:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:15:19.169 17:06:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:19.169 17:06:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:19.169 17:06:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:19.169 17:06:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:15:19.169 17:06:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:19.169 17:06:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:15:19.169 17:06:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:19.169 [global] 00:15:19.169 thread=1 00:15:19.169 invalidate=1 00:15:19.169 rw=write 00:15:19.169 time_based=1 00:15:19.169 runtime=1 00:15:19.169 ioengine=libaio 00:15:19.169 direct=1 00:15:19.169 bs=4096 00:15:19.169 iodepth=1 00:15:19.169 norandommap=0 00:15:19.169 numjobs=1 00:15:19.169 00:15:19.169 verify_dump=1 00:15:19.169 verify_backlog=512 00:15:19.169 verify_state_save=0 00:15:19.169 do_verify=1 00:15:19.169 verify=crc32c-intel 00:15:19.169 [job0] 00:15:19.169 filename=/dev/nvme0n1 00:15:19.169 [job1] 00:15:19.169 filename=/dev/nvme0n2 00:15:19.169 [job2] 00:15:19.169 filename=/dev/nvme0n3 00:15:19.169 [job3] 00:15:19.169 filename=/dev/nvme0n4 00:15:19.169 Could not set queue depth (nvme0n1) 00:15:19.169 Could not set queue depth (nvme0n2) 00:15:19.169 Could not set queue depth (nvme0n3) 00:15:19.169 Could not set queue depth (nvme0n4) 00:15:19.426 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:19.426 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:19.426 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:19.426 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:19.426 fio-3.35 00:15:19.426 Starting 4 threads 00:15:20.798 00:15:20.798 job0: (groupid=0, jobs=1): err= 0: pid=3051065: Wed May 15 17:06:08 2024 00:15:20.798 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:20.798 slat (nsec): min=7158, max=33454, avg=8064.07, stdev=1055.51 00:15:20.798 clat (usec): min=290, max=672, avg=365.66, stdev=39.59 00:15:20.798 lat (usec): min=298, max=680, avg=373.72, stdev=39.62 00:15:20.798 clat percentiles (usec): 00:15:20.798 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 334], 00:15:20.798 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 367], 00:15:20.798 | 70.00th=[ 375], 80.00th=[ 396], 90.00th=[ 424], 95.00th=[ 445], 00:15:20.798 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 523], 99.95th=[ 676], 00:15:20.798 | 99.99th=[ 676] 
00:15:20.798 write: IOPS=1887, BW=7548KiB/s (7730kB/s)(7556KiB/1001msec); 0 zone resets 00:15:20.798 slat (usec): min=10, max=14516, avg=19.71, stdev=333.73 00:15:20.798 clat (usec): min=146, max=290, avg=200.10, stdev=19.99 00:15:20.798 lat (usec): min=157, max=14724, avg=219.81, stdev=334.53 00:15:20.798 clat percentiles (usec): 00:15:20.798 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:15:20.798 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 200], 00:15:20.798 | 70.00th=[ 206], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 239], 00:15:20.798 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 281], 99.95th=[ 289], 00:15:20.798 | 99.99th=[ 289] 00:15:20.798 bw ( KiB/s): min= 8175, max= 8175, per=35.39%, avg=8175.00, stdev= 0.00, samples=1 00:15:20.798 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:15:20.798 lat (usec) : 250=53.99%, 500=45.72%, 750=0.29% 00:15:20.798 cpu : usr=3.00%, sys=5.50%, ctx=3427, majf=0, minf=1 00:15:20.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:20.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.798 issued rwts: total=1536,1889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.798 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:20.798 job1: (groupid=0, jobs=1): err= 0: pid=3051069: Wed May 15 17:06:08 2024 00:15:20.798 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:15:20.798 slat (nsec): min=10120, max=24385, avg=20586.14, stdev=3605.14 00:15:20.798 clat (usec): min=40872, max=42104, avg=41474.01, stdev=509.05 00:15:20.798 lat (usec): min=40893, max=42125, avg=41494.60, stdev=510.11 00:15:20.798 clat percentiles (usec): 00:15:20.798 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:20.798 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:15:20.798 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:20.798 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:20.798 | 99.99th=[42206] 00:15:20.798 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:15:20.798 slat (nsec): min=10515, max=34550, avg=11921.33, stdev=1723.84 00:15:20.798 clat (usec): min=142, max=321, avg=192.91, stdev=19.93 00:15:20.798 lat (usec): min=154, max=355, avg=204.84, stdev=20.38 00:15:20.798 clat percentiles (usec): 00:15:20.798 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 178], 00:15:20.798 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:15:20.798 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 223], 00:15:20.798 | 99.00th=[ 239], 99.50th=[ 253], 99.90th=[ 322], 99.95th=[ 322], 00:15:20.798 | 99.99th=[ 322] 00:15:20.798 bw ( KiB/s): min= 4096, max= 4096, per=17.73%, avg=4096.00, stdev= 0.00, samples=1 00:15:20.798 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:20.798 lat (usec) : 250=95.13%, 500=0.75% 00:15:20.798 lat (msec) : 50=4.12% 00:15:20.798 cpu : usr=1.18%, sys=0.20%, ctx=535, majf=0, minf=2 00:15:20.798 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:20.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.798 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.798 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:15:20.798 job2: (groupid=0, jobs=1): err= 0: pid=3051072: Wed May 15 17:06:08 2024 00:15:20.798 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:15:20.798 slat (nsec): min=6276, max=25382, avg=7368.39, stdev=1500.76 00:15:20.798 clat (usec): min=228, max=42173, avg=673.93, stdev=3829.01 00:15:20.798 lat (usec): min=235, max=42180, avg=681.30, stdev=3830.23 00:15:20.798 clat percentiles (usec): 00:15:20.798 | 1.00th=[ 245], 5.00th=[ 258], 10.00th=[ 262], 20.00th=[ 269], 00:15:20.798 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 318], 00:15:20.798 | 70.00th=[ 330], 80.00th=[ 347], 90.00th=[ 412], 95.00th=[ 457], 00:15:20.798 | 99.00th=[ 562], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:15:20.798 | 99.99th=[42206] 00:15:20.798 write: IOPS=1440, BW=5762KiB/s (5901kB/s)(5768KiB/1001msec); 0 zone resets 00:15:20.798 slat (nsec): min=9102, max=40443, avg=10388.77, stdev=1330.32 00:15:20.798 clat (usec): min=151, max=431, avg=195.78, stdev=19.85 00:15:20.798 lat (usec): min=161, max=471, avg=206.17, stdev=20.15 00:15:20.798 clat percentiles (usec): 00:15:20.798 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:15:20.798 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 196], 00:15:20.798 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 231], 00:15:20.798 | 99.00th=[ 247], 99.50th=[ 258], 99.90th=[ 404], 99.95th=[ 433], 00:15:20.798 | 99.99th=[ 433] 00:15:20.798 bw ( KiB/s): min= 8175, max= 8175, per=35.39%, avg=8175.00, stdev= 0.00, samples=1 00:15:20.799 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:15:20.799 lat (usec) : 250=58.68%, 500=40.67%, 750=0.28% 00:15:20.799 lat (msec) : 50=0.36% 00:15:20.799 cpu : usr=1.50%, sys=2.00%, ctx=2466, majf=0, minf=1 00:15:20.799 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:20.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.799 issued rwts: total=1024,1442,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.799 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:20.799 job3: (groupid=0, jobs=1): err= 0: pid=3051073: Wed May 15 17:06:08 2024 00:15:20.799 read: IOPS=1599, BW=6398KiB/s (6551kB/s)(6404KiB/1001msec) 00:15:20.799 slat (nsec): min=6356, max=22566, avg=7174.88, stdev=978.94 00:15:20.799 clat (usec): min=247, max=671, avg=338.03, stdev=46.92 00:15:20.799 lat (usec): min=254, max=679, avg=345.21, stdev=47.08 00:15:20.799 clat percentiles (usec): 00:15:20.799 | 1.00th=[ 269], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:15:20.799 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 334], 00:15:20.799 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 388], 95.00th=[ 453], 00:15:20.799 | 99.00th=[ 494], 99.50th=[ 519], 99.90th=[ 652], 99.95th=[ 676], 00:15:20.799 | 99.99th=[ 676] 00:15:20.799 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:20.799 slat (nsec): min=9168, max=37496, avg=10272.72, stdev=1426.04 00:15:20.799 clat (usec): min=153, max=394, avg=204.29, stdev=22.73 00:15:20.799 lat (usec): min=163, max=426, avg=214.57, stdev=22.92 00:15:20.799 clat percentiles (usec): 00:15:20.799 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 188], 00:15:20.799 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:15:20.799 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 243], 00:15:20.799 | 99.00th=[ 
277], 99.50th=[ 302], 99.90th=[ 363], 99.95th=[ 392], 00:15:20.799 | 99.99th=[ 396] 00:15:20.799 bw ( KiB/s): min= 8192, max= 8192, per=35.46%, avg=8192.00, stdev= 0.00, samples=1 00:15:20.799 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:20.799 lat (usec) : 250=54.75%, 500=44.92%, 750=0.33% 00:15:20.799 cpu : usr=1.70%, sys=3.50%, ctx=3649, majf=0, minf=1 00:15:20.799 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:20.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:20.799 issued rwts: total=1601,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:20.799 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:20.799 00:15:20.799 Run status group 0 (all jobs): 00:15:20.799 READ: bw=16.0MiB/s (16.8MB/s), 86.3KiB/s-6398KiB/s (88.3kB/s-6551kB/s), io=16.3MiB (17.1MB), run=1001-1020msec 00:15:20.799 WRITE: bw=22.6MiB/s (23.7MB/s), 2008KiB/s-8184KiB/s (2056kB/s-8380kB/s), io=23.0MiB (24.1MB), run=1001-1020msec 00:15:20.799 00:15:20.799 Disk stats (read/write): 00:15:20.799 nvme0n1: ios=1339/1536, merge=0/0, ticks=1464/287, in_queue=1751, util=97.19% 00:15:20.799 nvme0n2: ios=17/512, merge=0/0, ticks=705/101, in_queue=806, util=86.56% 00:15:20.799 nvme0n3: ios=866/1024, merge=0/0, ticks=804/192, in_queue=996, util=90.00% 00:15:20.799 nvme0n4: ios=1480/1536, merge=0/0, ticks=483/305, in_queue=788, util=89.50% 00:15:20.799 17:06:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:20.799 [global] 00:15:20.799 thread=1 00:15:20.799 invalidate=1 00:15:20.799 rw=randwrite 00:15:20.799 time_based=1 00:15:20.799 runtime=1 00:15:20.799 ioengine=libaio 00:15:20.799 direct=1 00:15:20.799 bs=4096 00:15:20.799 iodepth=1 00:15:20.799 norandommap=0 00:15:20.799 numjobs=1 00:15:20.799 00:15:20.799 verify_dump=1 00:15:20.799 verify_backlog=512 00:15:20.799 verify_state_save=0 00:15:20.799 do_verify=1 00:15:20.799 verify=crc32c-intel 00:15:20.799 [job0] 00:15:20.799 filename=/dev/nvme0n1 00:15:20.799 [job1] 00:15:20.799 filename=/dev/nvme0n2 00:15:20.799 [job2] 00:15:20.799 filename=/dev/nvme0n3 00:15:20.799 [job3] 00:15:20.799 filename=/dev/nvme0n4 00:15:20.799 Could not set queue depth (nvme0n1) 00:15:20.799 Could not set queue depth (nvme0n2) 00:15:20.799 Could not set queue depth (nvme0n3) 00:15:20.799 Could not set queue depth (nvme0n4) 00:15:21.056 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:21.056 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:21.056 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:21.056 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:21.056 fio-3.35 00:15:21.056 Starting 4 threads 00:15:22.440 00:15:22.440 job0: (groupid=0, jobs=1): err= 0: pid=3051467: Wed May 15 17:06:09 2024 00:15:22.440 read: IOPS=1788, BW=7153KiB/s (7325kB/s)(7160KiB/1001msec) 00:15:22.440 slat (nsec): min=7092, max=23899, avg=8120.31, stdev=1122.65 00:15:22.440 clat (usec): min=229, max=41046, avg=324.45, stdev=1362.37 00:15:22.440 lat (usec): min=237, max=41068, avg=332.57, stdev=1362.66 00:15:22.440 clat percentiles (usec): 00:15:22.440 | 1.00th=[ 235], 
5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:15:22.440 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:15:22.440 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 334], 95.00th=[ 355], 00:15:22.440 | 99.00th=[ 375], 99.50th=[ 437], 99.90th=[41157], 99.95th=[41157], 00:15:22.440 | 99.99th=[41157] 00:15:22.440 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:22.440 slat (nsec): min=10413, max=40815, avg=11873.27, stdev=1710.98 00:15:22.440 clat (usec): min=145, max=381, avg=179.76, stdev=21.33 00:15:22.440 lat (usec): min=156, max=393, avg=191.63, stdev=21.63 00:15:22.440 clat percentiles (usec): 00:15:22.440 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:15:22.440 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 180], 00:15:22.440 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 221], 00:15:22.440 | 99.00th=[ 241], 99.50th=[ 258], 99.90th=[ 289], 99.95th=[ 306], 00:15:22.440 | 99.99th=[ 383] 00:15:22.440 bw ( KiB/s): min= 8192, max= 8192, per=42.76%, avg=8192.00, stdev= 0.00, samples=1 00:15:22.440 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:22.440 lat (usec) : 250=60.03%, 500=39.86%, 750=0.03% 00:15:22.440 lat (msec) : 4=0.03%, 50=0.05% 00:15:22.440 cpu : usr=3.50%, sys=5.90%, ctx=3840, majf=0, minf=1 00:15:22.440 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:22.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.440 issued rwts: total=1790,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.440 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:22.440 job1: (groupid=0, jobs=1): err= 0: pid=3051468: Wed May 15 17:06:09 2024 00:15:22.440 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:15:22.441 slat (nsec): min=2374, max=24607, avg=8145.65, stdev=2016.03 00:15:22.441 clat (usec): min=229, max=42014, avg=698.45, stdev=4015.93 00:15:22.441 lat (usec): min=237, max=42038, avg=706.60, stdev=4017.19 00:15:22.441 clat percentiles (usec): 00:15:22.441 | 1.00th=[ 241], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 262], 00:15:22.441 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 302], 00:15:22.441 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 355], 95.00th=[ 367], 00:15:22.441 | 99.00th=[ 775], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:15:22.441 | 99.99th=[42206] 00:15:22.441 write: IOPS=1314, BW=5259KiB/s (5385kB/s)(5264KiB/1001msec); 0 zone resets 00:15:22.441 slat (nsec): min=10282, max=35450, avg=11817.98, stdev=1886.82 00:15:22.441 clat (usec): min=145, max=346, avg=192.62, stdev=26.39 00:15:22.441 lat (usec): min=157, max=377, avg=204.44, stdev=26.69 00:15:22.441 clat percentiles (usec): 00:15:22.441 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:15:22.441 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 196], 00:15:22.441 | 70.00th=[ 204], 80.00th=[ 217], 90.00th=[ 235], 95.00th=[ 241], 00:15:22.441 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 334], 99.95th=[ 347], 00:15:22.441 | 99.99th=[ 347] 00:15:22.441 bw ( KiB/s): min= 4096, max= 4096, per=21.38%, avg=4096.00, stdev= 0.00, samples=1 00:15:22.441 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:22.441 lat (usec) : 250=58.03%, 500=41.45%, 750=0.04%, 1000=0.04% 00:15:22.441 lat (msec) : 50=0.43% 00:15:22.441 cpu : usr=2.30%, sys=3.50%, ctx=2341, majf=0, minf=1 00:15:22.441 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:22.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.441 issued rwts: total=1024,1316,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:22.441 job2: (groupid=0, jobs=1): err= 0: pid=3051469: Wed May 15 17:06:09 2024 00:15:22.441 read: IOPS=624, BW=2497KiB/s (2557kB/s)(2552KiB/1022msec) 00:15:22.441 slat (nsec): min=7077, max=37692, avg=8473.71, stdev=2812.75 00:15:22.441 clat (usec): min=251, max=42019, avg=1245.78, stdev=6201.72 00:15:22.441 lat (usec): min=258, max=42040, avg=1254.25, stdev=6203.86 00:15:22.441 clat percentiles (usec): 00:15:22.441 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:15:22.441 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 285], 00:15:22.441 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 310], 00:15:22.441 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:15:22.441 | 99.99th=[42206] 00:15:22.441 write: IOPS=1001, BW=4008KiB/s (4104kB/s)(4096KiB/1022msec); 0 zone resets 00:15:22.441 slat (nsec): min=10068, max=39967, avg=11395.58, stdev=1884.45 00:15:22.441 clat (usec): min=160, max=423, avg=199.46, stdev=17.89 00:15:22.441 lat (usec): min=171, max=461, avg=210.85, stdev=18.37 00:15:22.441 clat percentiles (usec): 00:15:22.441 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:15:22.441 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 202], 00:15:22.441 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 225], 00:15:22.441 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 297], 99.95th=[ 424], 00:15:22.441 | 99.99th=[ 424] 00:15:22.441 bw ( KiB/s): min= 8192, max= 8192, per=42.76%, avg=8192.00, stdev= 0.00, samples=1 00:15:22.441 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:22.441 lat (usec) : 250=60.53%, 500=38.57% 00:15:22.441 lat (msec) : 50=0.90% 00:15:22.441 cpu : usr=1.27%, sys=2.64%, ctx=1662, majf=0, minf=1 00:15:22.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:22.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.441 issued rwts: total=638,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:22.441 job3: (groupid=0, jobs=1): err= 0: pid=3051470: Wed May 15 17:06:09 2024 00:15:22.441 read: IOPS=36, BW=145KiB/s (148kB/s)(148KiB/1023msec) 00:15:22.441 slat (nsec): min=6729, max=30329, avg=16439.27, stdev=7529.64 00:15:22.441 clat (usec): min=319, max=41278, avg=24517.41, stdev=20196.63 00:15:22.441 lat (usec): min=328, max=41302, avg=24533.85, stdev=20202.45 00:15:22.441 clat percentiles (usec): 00:15:22.441 | 1.00th=[ 322], 5.00th=[ 322], 10.00th=[ 351], 20.00th=[ 367], 00:15:22.441 | 30.00th=[ 445], 40.00th=[ 474], 50.00th=[40633], 60.00th=[41157], 00:15:22.441 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:22.441 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:22.441 | 99.99th=[41157] 00:15:22.441 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:15:22.441 slat (nsec): min=9189, max=47819, avg=11572.39, stdev=2611.18 00:15:22.441 clat (usec): min=171, max=339, avg=208.82, 
stdev=21.85 00:15:22.441 lat (usec): min=183, max=384, avg=220.39, stdev=23.03 00:15:22.441 clat percentiles (usec): 00:15:22.441 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:15:22.441 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:15:22.441 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 241], 95.00th=[ 253], 00:15:22.441 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 338], 99.95th=[ 338], 00:15:22.441 | 99.99th=[ 338] 00:15:22.441 bw ( KiB/s): min= 4096, max= 4096, per=21.38%, avg=4096.00, stdev= 0.00, samples=1 00:15:22.441 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:22.441 lat (usec) : 250=87.43%, 500=8.56% 00:15:22.441 lat (msec) : 50=4.01% 00:15:22.441 cpu : usr=0.29%, sys=0.78%, ctx=550, majf=0, minf=2 00:15:22.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:22.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.441 issued rwts: total=37,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:22.441 00:15:22.441 Run status group 0 (all jobs): 00:15:22.441 READ: bw=13.3MiB/s (14.0MB/s), 145KiB/s-7153KiB/s (148kB/s-7325kB/s), io=13.6MiB (14.3MB), run=1001-1023msec 00:15:22.441 WRITE: bw=18.7MiB/s (19.6MB/s), 2002KiB/s-8184KiB/s (2050kB/s-8380kB/s), io=19.1MiB (20.1MB), run=1001-1023msec 00:15:22.441 00:15:22.441 Disk stats (read/write): 00:15:22.441 nvme0n1: ios=1561/1690, merge=0/0, ticks=1487/281, in_queue=1768, util=98.40% 00:15:22.441 nvme0n2: ios=804/1024, merge=0/0, ticks=1189/185, in_queue=1374, util=98.78% 00:15:22.441 nvme0n3: ios=634/1024, merge=0/0, ticks=623/197, in_queue=820, util=88.99% 00:15:22.441 nvme0n4: ios=90/512, merge=0/0, ticks=1046/101, in_queue=1147, util=98.74% 00:15:22.441 17:06:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:22.441 [global] 00:15:22.441 thread=1 00:15:22.441 invalidate=1 00:15:22.441 rw=write 00:15:22.441 time_based=1 00:15:22.441 runtime=1 00:15:22.441 ioengine=libaio 00:15:22.441 direct=1 00:15:22.441 bs=4096 00:15:22.441 iodepth=128 00:15:22.441 norandommap=0 00:15:22.441 numjobs=1 00:15:22.441 00:15:22.441 verify_dump=1 00:15:22.441 verify_backlog=512 00:15:22.441 verify_state_save=0 00:15:22.441 do_verify=1 00:15:22.441 verify=crc32c-intel 00:15:22.441 [job0] 00:15:22.441 filename=/dev/nvme0n1 00:15:22.441 [job1] 00:15:22.441 filename=/dev/nvme0n2 00:15:22.441 [job2] 00:15:22.441 filename=/dev/nvme0n3 00:15:22.441 [job3] 00:15:22.441 filename=/dev/nvme0n4 00:15:22.441 Could not set queue depth (nvme0n1) 00:15:22.441 Could not set queue depth (nvme0n2) 00:15:22.441 Could not set queue depth (nvme0n3) 00:15:22.441 Could not set queue depth (nvme0n4) 00:15:22.745 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:22.745 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:22.745 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:22.745 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:22.745 fio-3.35 00:15:22.745 Starting 4 threads 00:15:24.118 00:15:24.118 job0: (groupid=0, jobs=1): err= 0: pid=3051842: Wed 
May 15 17:06:11 2024 00:15:24.118 read: IOPS=5982, BW=23.4MiB/s (24.5MB/s)(24.5MiB/1048msec) 00:15:24.118 slat (nsec): min=1322, max=9487.1k, avg=90586.47, stdev=655685.48 00:15:24.118 clat (usec): min=2919, max=60612, avg=11646.93, stdev=6916.84 00:15:24.118 lat (usec): min=2924, max=60614, avg=11737.52, stdev=6939.19 00:15:24.118 clat percentiles (usec): 00:15:24.118 | 1.00th=[ 3982], 5.00th=[ 7373], 10.00th=[ 7963], 20.00th=[ 8717], 00:15:24.118 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:15:24.118 | 70.00th=[10945], 80.00th=[12256], 90.00th=[16450], 95.00th=[18220], 00:15:24.118 | 99.00th=[55313], 99.50th=[57934], 99.90th=[60031], 99.95th=[60556], 00:15:24.118 | 99.99th=[60556] 00:15:24.118 write: IOPS=6351, BW=24.8MiB/s (26.0MB/s)(26.0MiB/1048msec); 0 zone resets 00:15:24.118 slat (usec): min=2, max=6253, avg=61.16, stdev=211.77 00:15:24.118 clat (usec): min=1558, max=60616, avg=8971.84, stdev=2317.90 00:15:24.118 lat (usec): min=1573, max=60620, avg=9033.00, stdev=2331.64 00:15:24.118 clat percentiles (usec): 00:15:24.118 | 1.00th=[ 2638], 5.00th=[ 4178], 10.00th=[ 5604], 20.00th=[ 7832], 00:15:24.118 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 9634], 60.00th=[10290], 00:15:24.118 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10814], 95.00th=[11076], 00:15:24.118 | 99.00th=[12911], 99.50th=[13042], 99.90th=[19268], 99.95th=[19268], 00:15:24.118 | 99.99th=[60556] 00:15:24.118 bw ( KiB/s): min=24576, max=28656, per=39.69%, avg=26616.00, stdev=2885.00, samples=2 00:15:24.118 iops : min= 6144, max= 7164, avg=6654.00, stdev=721.25, samples=2 00:15:24.118 lat (msec) : 2=0.15%, 4=2.57%, 10=44.14%, 20=51.69%, 50=0.47% 00:15:24.118 lat (msec) : 100=0.97% 00:15:24.118 cpu : usr=4.39%, sys=4.97%, ctx=892, majf=0, minf=1 00:15:24.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:15:24.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:24.118 issued rwts: total=6270,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:24.118 job1: (groupid=0, jobs=1): err= 0: pid=3051843: Wed May 15 17:06:11 2024 00:15:24.118 read: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1012msec) 00:15:24.118 slat (nsec): min=1061, max=14462k, avg=84472.08, stdev=669994.44 00:15:24.118 clat (usec): min=1243, max=46879, avg=13023.76, stdev=4515.86 00:15:24.118 lat (usec): min=1249, max=46883, avg=13108.23, stdev=4559.23 00:15:24.118 clat percentiles (usec): 00:15:24.118 | 1.00th=[ 2671], 5.00th=[ 5342], 10.00th=[ 9765], 20.00th=[11076], 00:15:24.118 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11863], 60.00th=[12518], 00:15:24.118 | 70.00th=[14746], 80.00th=[16319], 90.00th=[17957], 95.00th=[21103], 00:15:24.118 | 99.00th=[25822], 99.50th=[29754], 99.90th=[43254], 99.95th=[43254], 00:15:24.118 | 99.99th=[46924] 00:15:24.118 write: IOPS=4804, BW=18.8MiB/s (19.7MB/s)(19.0MiB/1012msec); 0 zone resets 00:15:24.118 slat (usec): min=2, max=10336, avg=90.70, stdev=686.40 00:15:24.118 clat (usec): min=871, max=62378, avg=13922.38, stdev=9860.79 00:15:24.118 lat (usec): min=878, max=62388, avg=14013.08, stdev=9919.66 00:15:24.118 clat percentiles (usec): 00:15:24.118 | 1.00th=[ 2802], 5.00th=[ 5145], 10.00th=[ 6652], 20.00th=[ 7570], 00:15:24.118 | 30.00th=[ 8979], 40.00th=[10159], 50.00th=[10945], 60.00th=[13304], 00:15:24.118 | 70.00th=[15008], 80.00th=[17957], 90.00th=[22152], 95.00th=[32900], 
00:15:24.118 | 99.00th=[62129], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:15:24.118 | 99.99th=[62129] 00:15:24.118 bw ( KiB/s): min=17392, max=20480, per=28.24%, avg=18936.00, stdev=2183.55, samples=2 00:15:24.118 iops : min= 4348, max= 5120, avg=4734.00, stdev=545.89, samples=2 00:15:24.118 lat (usec) : 1000=0.04% 00:15:24.118 lat (msec) : 2=0.41%, 4=2.76%, 10=22.82%, 20=63.47%, 50=9.35% 00:15:24.118 lat (msec) : 100=1.15% 00:15:24.118 cpu : usr=3.56%, sys=6.33%, ctx=332, majf=0, minf=1 00:15:24.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:24.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:24.118 issued rwts: total=4608,4862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:24.118 job2: (groupid=0, jobs=1): err= 0: pid=3051844: Wed May 15 17:06:11 2024 00:15:24.118 read: IOPS=1765, BW=7063KiB/s (7233kB/s)(7120KiB/1008msec) 00:15:24.118 slat (nsec): min=1315, max=14114k, avg=144249.28, stdev=1084328.55 00:15:24.118 clat (usec): min=5356, max=49190, avg=19922.13, stdev=9373.80 00:15:24.118 lat (usec): min=5590, max=49216, avg=20066.38, stdev=9456.26 00:15:24.118 clat percentiles (usec): 00:15:24.118 | 1.00th=[ 6587], 5.00th=[ 7898], 10.00th=[ 9765], 20.00th=[12518], 00:15:24.118 | 30.00th=[12911], 40.00th=[14091], 50.00th=[17957], 60.00th=[23200], 00:15:24.118 | 70.00th=[24249], 80.00th=[25035], 90.00th=[32113], 95.00th=[39060], 00:15:24.118 | 99.00th=[49021], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:15:24.118 | 99.99th=[49021] 00:15:24.118 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:15:24.118 slat (usec): min=2, max=10769, avg=343.02, stdev=1430.08 00:15:24.118 clat (usec): min=810, max=118553, avg=44816.11, stdev=29517.50 00:15:24.118 lat (usec): min=822, max=118567, avg=45159.13, stdev=29699.32 00:15:24.118 clat percentiles (msec): 00:15:24.118 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 19], 20.00th=[ 20], 00:15:24.118 | 30.00th=[ 22], 40.00th=[ 25], 50.00th=[ 38], 60.00th=[ 47], 00:15:24.118 | 70.00th=[ 55], 80.00th=[ 65], 90.00th=[ 100], 95.00th=[ 107], 00:15:24.118 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 120], 99.95th=[ 120], 00:15:24.118 | 99.99th=[ 120] 00:15:24.118 bw ( KiB/s): min= 7824, max= 8560, per=12.22%, avg=8192.00, stdev=520.43, samples=2 00:15:24.118 iops : min= 1956, max= 2140, avg=2048.00, stdev=130.11, samples=2 00:15:24.118 lat (usec) : 1000=0.18% 00:15:24.118 lat (msec) : 4=0.16%, 10=5.77%, 20=32.26%, 50=42.32%, 100=14.34% 00:15:24.118 lat (msec) : 250=4.96% 00:15:24.118 cpu : usr=1.59%, sys=3.08%, ctx=263, majf=0, minf=1 00:15:24.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:15:24.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:24.118 issued rwts: total=1780,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:24.118 job3: (groupid=0, jobs=1): err= 0: pid=3051845: Wed May 15 17:06:11 2024 00:15:24.118 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:15:24.118 slat (nsec): min=1534, max=16429k, avg=110173.45, stdev=701927.78 00:15:24.118 clat (usec): min=7051, max=44760, avg=13577.30, stdev=4765.50 00:15:24.118 lat (usec): min=7057, max=44788, avg=13687.47, 
stdev=4836.10 00:15:24.118 clat percentiles (usec): 00:15:24.118 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[11076], 00:15:24.118 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12518], 00:15:24.118 | 70.00th=[13173], 80.00th=[14877], 90.00th=[18220], 95.00th=[25297], 00:15:24.118 | 99.00th=[32637], 99.50th=[32637], 99.90th=[44303], 99.95th=[44303], 00:15:24.118 | 99.99th=[44827] 00:15:24.118 write: IOPS=3998, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1001msec); 0 zone resets 00:15:24.118 slat (usec): min=2, max=13662, avg=145.09, stdev=818.81 00:15:24.118 clat (usec): min=282, max=83559, avg=19210.00, stdev=17782.76 00:15:24.118 lat (usec): min=4784, max=83572, avg=19355.09, stdev=17907.27 00:15:24.118 clat percentiles (usec): 00:15:24.118 | 1.00th=[ 5604], 5.00th=[ 9110], 10.00th=[10814], 20.00th=[11469], 00:15:24.118 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:15:24.118 | 70.00th=[12780], 80.00th=[18220], 90.00th=[51119], 95.00th=[68682], 00:15:24.118 | 99.00th=[79168], 99.50th=[82314], 99.90th=[83362], 99.95th=[83362], 00:15:24.118 | 99.99th=[83362] 00:15:24.118 bw ( KiB/s): min= 8888, max= 8888, per=13.26%, avg=8888.00, stdev= 0.00, samples=1 00:15:24.118 iops : min= 2222, max= 2222, avg=2222.00, stdev= 0.00, samples=1 00:15:24.118 lat (usec) : 500=0.01% 00:15:24.118 lat (msec) : 10=7.08%, 20=81.16%, 50=6.31%, 100=5.43% 00:15:24.118 cpu : usr=3.20%, sys=5.50%, ctx=354, majf=0, minf=1 00:15:24.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:24.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:24.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:24.118 issued rwts: total=3584,4002,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:24.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:24.118 00:15:24.118 Run status group 0 (all jobs): 00:15:24.118 READ: bw=60.5MiB/s (63.5MB/s), 7063KiB/s-23.4MiB/s (7233kB/s-24.5MB/s), io=63.4MiB (66.5MB), run=1001-1048msec 00:15:24.118 WRITE: bw=65.5MiB/s (68.7MB/s), 8127KiB/s-24.8MiB/s (8322kB/s-26.0MB/s), io=68.6MiB (72.0MB), run=1001-1048msec 00:15:24.118 00:15:24.118 Disk stats (read/write): 00:15:24.118 nvme0n1: ios=5416/5632, merge=0/0, ticks=56141/49175, in_queue=105316, util=98.50% 00:15:24.118 nvme0n2: ios=3854/4096, merge=0/0, ticks=45226/47794, in_queue=93020, util=96.75% 00:15:24.118 nvme0n3: ios=1578/1799, merge=0/0, ticks=23856/52290, in_queue=76146, util=99.17% 00:15:24.118 nvme0n4: ios=2939/3072, merge=0/0, ticks=21223/31901, in_queue=53124, util=99.27% 00:15:24.118 17:06:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:24.118 [global] 00:15:24.118 thread=1 00:15:24.118 invalidate=1 00:15:24.118 rw=randwrite 00:15:24.118 time_based=1 00:15:24.118 runtime=1 00:15:24.118 ioengine=libaio 00:15:24.118 direct=1 00:15:24.118 bs=4096 00:15:24.118 iodepth=128 00:15:24.118 norandommap=0 00:15:24.118 numjobs=1 00:15:24.118 00:15:24.118 verify_dump=1 00:15:24.118 verify_backlog=512 00:15:24.118 verify_state_save=0 00:15:24.118 do_verify=1 00:15:24.118 verify=crc32c-intel 00:15:24.118 [job0] 00:15:24.118 filename=/dev/nvme0n1 00:15:24.118 [job1] 00:15:24.118 filename=/dev/nvme0n2 00:15:24.118 [job2] 00:15:24.118 filename=/dev/nvme0n3 00:15:24.118 [job3] 00:15:24.119 filename=/dev/nvme0n4 00:15:24.119 Could not set queue depth (nvme0n1) 00:15:24.119 Could not set queue depth 
(nvme0n2) 00:15:24.119 Could not set queue depth (nvme0n3) 00:15:24.119 Could not set queue depth (nvme0n4) 00:15:24.119 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.119 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.119 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.119 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.119 fio-3.35 00:15:24.119 Starting 4 threads 00:15:25.514 00:15:25.514 job0: (groupid=0, jobs=1): err= 0: pid=3052235: Wed May 15 17:06:13 2024 00:15:25.514 read: IOPS=3885, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1008msec) 00:15:25.514 slat (nsec): min=1604, max=10760k, avg=108499.43, stdev=734677.04 00:15:25.514 clat (usec): min=3141, max=29283, avg=12729.01, stdev=4295.08 00:15:25.514 lat (usec): min=3461, max=29286, avg=12837.51, stdev=4334.89 00:15:25.514 clat percentiles (usec): 00:15:25.514 | 1.00th=[ 5473], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[10290], 00:15:25.514 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:15:25.514 | 70.00th=[12125], 80.00th=[15664], 90.00th=[18220], 95.00th=[22414], 00:15:25.514 | 99.00th=[28181], 99.50th=[28181], 99.90th=[29230], 99.95th=[29230], 00:15:25.514 | 99.99th=[29230] 00:15:25.514 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:15:25.514 slat (usec): min=2, max=21518, avg=136.00, stdev=735.23 00:15:25.514 clat (usec): min=2254, max=99287, avg=18989.79, stdev=16496.18 00:15:25.514 lat (usec): min=2265, max=99299, avg=19125.80, stdev=16580.74 00:15:25.514 clat percentiles (usec): 00:15:25.514 | 1.00th=[ 2802], 5.00th=[ 5014], 10.00th=[ 7111], 20.00th=[10028], 00:15:25.514 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[18744], 00:15:25.514 | 70.00th=[20317], 80.00th=[20579], 90.00th=[44827], 95.00th=[54789], 00:15:25.514 | 99.00th=[91751], 99.50th=[94897], 99.90th=[98042], 99.95th=[98042], 00:15:25.514 | 99.99th=[99091] 00:15:25.514 bw ( KiB/s): min=12288, max=20480, per=23.35%, avg=16384.00, stdev=5792.62, samples=2 00:15:25.514 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:15:25.514 lat (msec) : 4=1.86%, 10=13.30%, 20=62.91%, 50=17.80%, 100=4.13% 00:15:25.514 cpu : usr=3.08%, sys=3.67%, ctx=600, majf=0, minf=1 00:15:25.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:25.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.514 issued rwts: total=3917,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.514 job1: (groupid=0, jobs=1): err= 0: pid=3052246: Wed May 15 17:06:13 2024 00:15:25.514 read: IOPS=3358, BW=13.1MiB/s (13.8MB/s)(13.3MiB/1013msec) 00:15:25.515 slat (nsec): min=1090, max=17392k, avg=155627.69, stdev=1091425.77 00:15:25.515 clat (usec): min=3071, max=65426, avg=17675.35, stdev=9806.49 00:15:25.515 lat (usec): min=5559, max=65430, avg=17830.98, stdev=9909.78 00:15:25.515 clat percentiles (usec): 00:15:25.515 | 1.00th=[ 6521], 5.00th=[ 9503], 10.00th=[10945], 20.00th=[11600], 00:15:25.515 | 30.00th=[11863], 40.00th=[12256], 50.00th=[13698], 60.00th=[15533], 00:15:25.515 | 70.00th=[19006], 80.00th=[22152], 90.00th=[27657], 
95.00th=[42730], 00:15:25.515 | 99.00th=[58459], 99.50th=[62653], 99.90th=[65274], 99.95th=[65274], 00:15:25.515 | 99.99th=[65274] 00:15:25.515 write: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec); 0 zone resets 00:15:25.515 slat (nsec): min=1818, max=18949k, avg=113708.68, stdev=765381.32 00:15:25.515 clat (usec): min=634, max=65419, avg=18914.98, stdev=11925.37 00:15:25.515 lat (usec): min=641, max=65423, avg=19028.69, stdev=11980.45 00:15:25.515 clat percentiles (usec): 00:15:25.515 | 1.00th=[ 2040], 5.00th=[ 5014], 10.00th=[ 7701], 20.00th=[ 9896], 00:15:25.515 | 30.00th=[11994], 40.00th=[14484], 50.00th=[18744], 60.00th=[20055], 00:15:25.515 | 70.00th=[20317], 80.00th=[22414], 90.00th=[30802], 95.00th=[53216], 00:15:25.515 | 99.00th=[56361], 99.50th=[56361], 99.90th=[58459], 99.95th=[65274], 00:15:25.515 | 99.99th=[65274] 00:15:25.515 bw ( KiB/s): min=13200, max=15472, per=20.43%, avg=14336.00, stdev=1606.55, samples=2 00:15:25.515 iops : min= 3300, max= 3868, avg=3584.00, stdev=401.64, samples=2 00:15:25.515 lat (usec) : 750=0.04%, 1000=0.14% 00:15:25.515 lat (msec) : 2=0.27%, 4=1.35%, 10=11.29%, 20=50.74%, 50=32.09% 00:15:25.515 lat (msec) : 100=4.07% 00:15:25.515 cpu : usr=2.08%, sys=4.35%, ctx=342, majf=0, minf=1 00:15:25.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:25.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.515 issued rwts: total=3402,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.515 job2: (groupid=0, jobs=1): err= 0: pid=3052278: Wed May 15 17:06:13 2024 00:15:25.515 read: IOPS=4396, BW=17.2MiB/s (18.0MB/s)(17.4MiB/1013msec) 00:15:25.515 slat (nsec): min=1343, max=16995k, avg=124690.30, stdev=983415.91 00:15:25.515 clat (usec): min=4379, max=45772, avg=14985.00, stdev=5901.60 00:15:25.515 lat (usec): min=4397, max=45777, avg=15109.69, stdev=5994.26 00:15:25.515 clat percentiles (usec): 00:15:25.515 | 1.00th=[ 5604], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[10945], 00:15:25.515 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12256], 60.00th=[13435], 00:15:25.515 | 70.00th=[16450], 80.00th=[20055], 90.00th=[22152], 95.00th=[27919], 00:15:25.515 | 99.00th=[39584], 99.50th=[39584], 99.90th=[45876], 99.95th=[45876], 00:15:25.515 | 99.99th=[45876] 00:15:25.515 write: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec); 0 zone resets 00:15:25.515 slat (usec): min=2, max=15091, avg=92.09, stdev=604.16 00:15:25.515 clat (usec): min=1578, max=52788, avg=13384.68, stdev=7253.78 00:15:25.515 lat (usec): min=1590, max=52796, avg=13476.77, stdev=7306.50 00:15:25.515 clat percentiles (usec): 00:15:25.515 | 1.00th=[ 3884], 5.00th=[ 7111], 10.00th=[ 8586], 20.00th=[10159], 00:15:25.515 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:15:25.515 | 70.00th=[11994], 80.00th=[12780], 90.00th=[22152], 95.00th=[24511], 00:15:25.515 | 99.00th=[49021], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:15:25.515 | 99.99th=[52691] 00:15:25.515 bw ( KiB/s): min=14856, max=22008, per=26.27%, avg=18432.00, stdev=5057.23, samples=2 00:15:25.515 iops : min= 3714, max= 5502, avg=4608.00, stdev=1264.31, samples=2 00:15:25.515 lat (msec) : 2=0.10%, 4=0.44%, 10=11.60%, 20=71.74%, 50=15.70% 00:15:25.515 lat (msec) : 100=0.42% 00:15:25.515 cpu : usr=4.05%, sys=4.25%, ctx=443, majf=0, minf=1 00:15:25.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:25.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.515 issued rwts: total=4454,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.515 job3: (groupid=0, jobs=1): err= 0: pid=3052289: Wed May 15 17:06:13 2024 00:15:25.515 read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec) 00:15:25.515 slat (nsec): min=1653, max=9436.0k, avg=97333.44, stdev=606004.61 00:15:25.515 clat (usec): min=7537, max=27015, avg=12460.22, stdev=2039.81 00:15:25.515 lat (usec): min=8223, max=35558, avg=12557.55, stdev=2110.06 00:15:25.515 clat percentiles (usec): 00:15:25.515 | 1.00th=[ 8848], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[11338], 00:15:25.515 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:15:25.515 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14746], 95.00th=[16319], 00:15:25.515 | 99.00th=[17957], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132], 00:15:25.515 | 99.99th=[27132] 00:15:25.515 write: IOPS=5423, BW=21.2MiB/s (22.2MB/s)(21.4MiB/1011msec); 0 zone resets 00:15:25.515 slat (usec): min=2, max=12985, avg=84.93, stdev=558.90 00:15:25.515 clat (usec): min=511, max=27559, avg=11703.61, stdev=2370.76 00:15:25.515 lat (usec): min=540, max=27592, avg=11788.53, stdev=2410.66 00:15:25.515 clat percentiles (usec): 00:15:25.515 | 1.00th=[ 3392], 5.00th=[ 7570], 10.00th=[ 9110], 20.00th=[10945], 00:15:25.515 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:15:25.515 | 70.00th=[12256], 80.00th=[12518], 90.00th=[14222], 95.00th=[15795], 00:15:25.515 | 99.00th=[17695], 99.50th=[18744], 99.90th=[20317], 99.95th=[23987], 00:15:25.515 | 99.99th=[27657] 00:15:25.515 bw ( KiB/s): min=20712, max=22128, per=30.53%, avg=21420.00, stdev=1001.26, samples=2 00:15:25.515 iops : min= 5178, max= 5532, avg=5355.00, stdev=250.32, samples=2 00:15:25.515 lat (usec) : 750=0.02%, 1000=0.03% 00:15:25.515 lat (msec) : 2=0.26%, 4=0.45%, 10=9.02%, 20=89.77%, 50=0.45% 00:15:25.515 cpu : usr=5.35%, sys=5.84%, ctx=485, majf=0, minf=1 00:15:25.515 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:25.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:25.515 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:25.515 issued rwts: total=5120,5483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:25.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:25.515 00:15:25.515 Run status group 0 (all jobs): 00:15:25.515 READ: bw=65.1MiB/s (68.3MB/s), 13.1MiB/s-19.8MiB/s (13.8MB/s-20.7MB/s), io=66.0MiB (69.2MB), run=1008-1013msec 00:15:25.515 WRITE: bw=68.5MiB/s (71.9MB/s), 13.8MiB/s-21.2MiB/s (14.5MB/s-22.2MB/s), io=69.4MiB (72.8MB), run=1008-1013msec 00:15:25.515 00:15:25.515 Disk stats (read/write): 00:15:25.515 nvme0n1: ios=2734/3072, merge=0/0, ticks=33674/67401, in_queue=101075, util=93.29% 00:15:25.515 nvme0n2: ios=2610/3072, merge=0/0, ticks=40675/56030, in_queue=96705, util=97.33% 00:15:25.515 nvme0n3: ios=3584/3967, merge=0/0, ticks=49202/48211, in_queue=97413, util=87.55% 00:15:25.515 nvme0n4: ios=4140/4502, merge=0/0, ticks=24871/25661, in_queue=50532, util=99.23% 00:15:25.515 17:06:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:25.515 17:06:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3052456 00:15:25.515 17:06:13 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:25.515 17:06:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:25.515 [global] 00:15:25.515 thread=1 00:15:25.515 invalidate=1 00:15:25.515 rw=read 00:15:25.515 time_based=1 00:15:25.515 runtime=10 00:15:25.515 ioengine=libaio 00:15:25.515 direct=1 00:15:25.515 bs=4096 00:15:25.515 iodepth=1 00:15:25.515 norandommap=1 00:15:25.515 numjobs=1 00:15:25.515 00:15:25.515 [job0] 00:15:25.515 filename=/dev/nvme0n1 00:15:25.515 [job1] 00:15:25.515 filename=/dev/nvme0n2 00:15:25.515 [job2] 00:15:25.515 filename=/dev/nvme0n3 00:15:25.515 [job3] 00:15:25.515 filename=/dev/nvme0n4 00:15:25.515 Could not set queue depth (nvme0n1) 00:15:25.515 Could not set queue depth (nvme0n2) 00:15:25.515 Could not set queue depth (nvme0n3) 00:15:25.515 Could not set queue depth (nvme0n4) 00:15:25.773 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:25.773 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:25.773 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:25.773 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:25.773 fio-3.35 00:15:25.773 Starting 4 threads 00:15:29.061 17:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:29.061 17:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:29.061 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=286720, buflen=4096 00:15:29.061 fio: pid=3052748, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:29.061 17:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:29.061 17:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:29.061 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=33845248, buflen=4096 00:15:29.061 fio: pid=3052743, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:29.061 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=28471296, buflen=4096 00:15:29.062 fio: pid=3052713, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:29.062 17:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:29.062 17:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:29.319 17:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:29.319 17:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:29.319 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=331776, buflen=4096 00:15:29.319 fio: pid=3052726, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:15:29.319 00:15:29.319 job0: 
(groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3052713: Wed May 15 17:06:16 2024 00:15:29.319 read: IOPS=2275, BW=9101KiB/s (9320kB/s)(27.2MiB/3055msec) 00:15:29.319 slat (usec): min=5, max=16274, avg=13.07, stdev=262.88 00:15:29.319 clat (usec): min=252, max=49112, avg=421.45, stdev=2153.37 00:15:29.319 lat (usec): min=268, max=53861, avg=434.52, stdev=2204.14 00:15:29.319 clat percentiles (usec): 00:15:29.319 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 293], 20.00th=[ 297], 00:15:29.319 | 30.00th=[ 302], 40.00th=[ 306], 50.00th=[ 306], 60.00th=[ 310], 00:15:29.319 | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 326], 95.00th=[ 330], 00:15:29.319 | 99.00th=[ 429], 99.50th=[ 461], 99.90th=[41157], 99.95th=[41157], 00:15:29.319 | 99.99th=[49021] 00:15:29.319 bw ( KiB/s): min= 4448, max=12632, per=58.22%, avg=10936.00, stdev=3627.30, samples=5 00:15:29.319 iops : min= 1112, max= 3158, avg=2734.00, stdev=906.83, samples=5 00:15:29.319 lat (usec) : 500=99.57%, 750=0.13%, 1000=0.01% 00:15:29.319 lat (msec) : 50=0.27% 00:15:29.319 cpu : usr=1.15%, sys=3.70%, ctx=6955, majf=0, minf=1 00:15:29.319 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.319 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.319 issued rwts: total=6952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.319 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.319 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3052726: Wed May 15 17:06:16 2024 00:15:29.319 read: IOPS=25, BW=99.0KiB/s (101kB/s)(324KiB/3272msec) 00:15:29.319 slat (usec): min=9, max=17773, avg=416.86, stdev=2264.53 00:15:29.319 clat (usec): min=379, max=49975, avg=39648.33, stdev=7774.48 00:15:29.319 lat (usec): min=396, max=58943, avg=39980.38, stdev=8123.84 00:15:29.319 clat percentiles (usec): 00:15:29.320 | 1.00th=[ 379], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:29.320 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:29.320 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:15:29.320 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:15:29.320 | 99.99th=[50070] 00:15:29.320 bw ( KiB/s): min= 96, max= 112, per=0.53%, avg=100.50, stdev= 6.44, samples=6 00:15:29.320 iops : min= 24, max= 28, avg=25.00, stdev= 1.67, samples=6 00:15:29.320 lat (usec) : 500=2.44%, 1000=1.22% 00:15:29.320 lat (msec) : 50=95.12% 00:15:29.320 cpu : usr=0.00%, sys=0.31%, ctx=86, majf=0, minf=1 00:15:29.320 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.320 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.320 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.320 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.320 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3052743: Wed May 15 17:06:16 2024 00:15:29.320 read: IOPS=2854, BW=11.1MiB/s (11.7MB/s)(32.3MiB/2895msec) 00:15:29.320 slat (nsec): min=6593, max=36703, avg=8325.01, stdev=1346.21 00:15:29.320 clat (usec): min=230, max=41075, avg=337.17, stdev=1549.09 00:15:29.320 lat (usec): min=238, max=41099, avg=345.50, stdev=1549.58 00:15:29.320 clat percentiles (usec): 
00:15:29.320 | 1.00th=[ 245], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 265], 00:15:29.320 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:15:29.320 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 293], 95.00th=[ 302], 00:15:29.320 | 99.00th=[ 404], 99.50th=[ 441], 99.90th=[40633], 99.95th=[41157], 00:15:29.320 | 99.99th=[41157] 00:15:29.320 bw ( KiB/s): min= 7208, max=13968, per=65.11%, avg=12230.40, stdev=2847.69, samples=5 00:15:29.320 iops : min= 1802, max= 3492, avg=3057.60, stdev=711.92, samples=5 00:15:29.320 lat (usec) : 250=2.60%, 500=97.19%, 750=0.02% 00:15:29.320 lat (msec) : 2=0.02%, 50=0.15% 00:15:29.320 cpu : usr=1.87%, sys=4.25%, ctx=8267, majf=0, minf=1 00:15:29.320 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.320 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.320 issued rwts: total=8264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.320 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.320 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3052748: Wed May 15 17:06:16 2024 00:15:29.320 read: IOPS=25, BW=102KiB/s (105kB/s)(280KiB/2732msec) 00:15:29.320 slat (nsec): min=9933, max=39785, avg=22116.49, stdev=3835.67 00:15:29.320 clat (usec): min=379, max=45020, avg=38696.60, stdev=9514.87 00:15:29.320 lat (usec): min=392, max=45034, avg=38718.75, stdev=9514.27 00:15:29.320 clat percentiles (usec): 00:15:29.320 | 1.00th=[ 379], 5.00th=[ 578], 10.00th=[40633], 20.00th=[41157], 00:15:29.320 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:29.320 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:15:29.320 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:15:29.320 | 99.99th=[44827] 00:15:29.320 bw ( KiB/s): min= 96, max= 120, per=0.54%, avg=102.40, stdev=10.43, samples=5 00:15:29.320 iops : min= 24, max= 30, avg=25.60, stdev= 2.61, samples=5 00:15:29.320 lat (usec) : 500=2.82%, 750=2.82% 00:15:29.320 lat (msec) : 50=92.96% 00:15:29.320 cpu : usr=0.15%, sys=0.00%, ctx=73, majf=0, minf=2 00:15:29.320 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.320 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.320 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.320 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.320 00:15:29.320 Run status group 0 (all jobs): 00:15:29.320 READ: bw=18.3MiB/s (19.2MB/s), 99.0KiB/s-11.1MiB/s (101kB/s-11.7MB/s), io=60.0MiB (62.9MB), run=2732-3272msec 00:15:29.320 00:15:29.320 Disk stats (read/write): 00:15:29.320 nvme0n1: ios=6844/0, merge=0/0, ticks=2676/0, in_queue=2676, util=95.43% 00:15:29.320 nvme0n2: ios=78/0, merge=0/0, ticks=3080/0, in_queue=3080, util=95.36% 00:15:29.320 nvme0n3: ios=8248/0, merge=0/0, ticks=2829/0, in_queue=2829, util=100.00% 00:15:29.320 nvme0n4: ios=113/0, merge=0/0, ticks=3308/0, in_queue=3308, util=99.70% 00:15:29.578 17:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:29.578 17:06:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:29.578 17:06:17 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:29.578 17:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:29.837 17:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:29.837 17:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:30.095 17:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:30.095 17:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:30.352 17:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:30.352 17:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3052456 00:15:30.352 17:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:30.352 17:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:30.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.352 17:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:30.352 17:06:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:15:30.352 17:06:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:30.352 17:06:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.352 17:06:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:30.352 17:06:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.352 17:06:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:15:30.352 17:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:30.352 17:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:30.352 nvmf hotplug test: fio failed as expected 00:15:30.352 17:06:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.611 17:06:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:30.611 17:06:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:30.611 17:06:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:30.611 17:06:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:30.611 17:06:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:30.611 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:30.611 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:30.612 rmmod nvme_tcp 00:15:30.612 rmmod nvme_fabrics 00:15:30.612 rmmod nvme_keyring 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3049219 ']' 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3049219 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3049219 ']' 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 3049219 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3049219 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3049219' 00:15:30.612 killing process with pid 3049219 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3049219 00:15:30.612 [2024-05-15 17:06:18.209468] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:30.612 17:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3049219 00:15:30.871 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:30.871 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:30.871 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:30.871 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.871 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:30.871 17:06:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.871 17:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.871 17:06:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.406 17:06:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:33.406 00:15:33.406 real 0m26.529s 00:15:33.406 user 1m46.601s 00:15:33.406 sys 0m7.869s 00:15:33.406 17:06:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:33.406 17:06:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.406 ************************************ 00:15:33.406 END TEST nvmf_fio_target 00:15:33.406 ************************************ 00:15:33.406 17:06:20 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:33.406 17:06:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:33.406 17:06:20 
nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:33.406 17:06:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:33.406 ************************************ 00:15:33.406 START TEST nvmf_bdevio 00:15:33.406 ************************************ 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:33.406 * Looking for test storage... 00:15:33.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:33.406 17:06:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:38.679 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:38.679 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:38.679 Found net devices under 0000:86:00.0: cvl_0_0 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:38.679 
Found net devices under 0000:86:00.1: cvl_0_1 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:38.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:38.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:15:38.679 00:15:38.679 --- 10.0.0.2 ping statistics --- 00:15:38.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.679 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:15:38.679 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:38.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:15:38.680 00:15:38.680 --- 10.0.0.1 ping statistics --- 00:15:38.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.680 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3057040 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3057040 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3057040 ']' 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:38.680 17:06:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:38.680 [2024-05-15 17:06:26.006061] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:15:38.680 [2024-05-15 17:06:26.006103] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.680 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.680 [2024-05-15 17:06:26.063641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.680 [2024-05-15 17:06:26.134589] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.680 [2024-05-15 17:06:26.134632] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:38.680 [2024-05-15 17:06:26.134639] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.680 [2024-05-15 17:06:26.134644] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.680 [2024-05-15 17:06:26.134650] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.680 [2024-05-15 17:06:26.134767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:38.680 [2024-05-15 17:06:26.134853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:38.680 [2024-05-15 17:06:26.134939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.680 [2024-05-15 17:06:26.134940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:39.247 [2024-05-15 17:06:26.851970] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:39.247 Malloc0 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.247 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
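For reference, a condensed sketch of the target-side setup that bdevio.sh drives through rpc_cmd in the trace above (rpc_cmd is the test framework's wrapper around the SPDK RPC client, assumed here to be talking to the nvmf_tgt started earlier on its default /var/tmp/spdk.sock socket; all names, sizes and addresses are the ones visible in this run):

    # TCP transport (flags as traced), a 64 MiB malloc bdev with 512-byte blocks,
    # and an allow-any-host subsystem exposing it on 10.0.0.2:4420
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420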
00:15:39.247 [2024-05-15 17:06:26.903147] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:39.247 [2024-05-15 17:06:26.903395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.506 17:06:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.506 17:06:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:39.506 17:06:26 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:39.506 17:06:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:39.506 17:06:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:39.506 17:06:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:39.506 17:06:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:39.506 { 00:15:39.506 "params": { 00:15:39.506 "name": "Nvme$subsystem", 00:15:39.506 "trtype": "$TEST_TRANSPORT", 00:15:39.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.506 "adrfam": "ipv4", 00:15:39.506 "trsvcid": "$NVMF_PORT", 00:15:39.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.506 "hdgst": ${hdgst:-false}, 00:15:39.506 "ddgst": ${ddgst:-false} 00:15:39.506 }, 00:15:39.506 "method": "bdev_nvme_attach_controller" 00:15:39.506 } 00:15:39.506 EOF 00:15:39.506 )") 00:15:39.506 17:06:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:39.506 17:06:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:39.506 17:06:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:39.506 17:06:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:39.506 "params": { 00:15:39.506 "name": "Nvme1", 00:15:39.506 "trtype": "tcp", 00:15:39.506 "traddr": "10.0.0.2", 00:15:39.506 "adrfam": "ipv4", 00:15:39.506 "trsvcid": "4420", 00:15:39.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.506 "hdgst": false, 00:15:39.506 "ddgst": false 00:15:39.506 }, 00:15:39.506 "method": "bdev_nvme_attach_controller" 00:15:39.506 }' 00:15:39.506 [2024-05-15 17:06:26.952046] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:15:39.506 [2024-05-15 17:06:26.952090] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057078 ] 00:15:39.506 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.506 [2024-05-15 17:06:27.007999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:39.506 [2024-05-15 17:06:27.083274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.506 [2024-05-15 17:06:27.083371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.506 [2024-05-15 17:06:27.083372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.764 I/O targets: 00:15:39.764 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:39.764 00:15:39.764 00:15:39.764 CUnit - A unit testing framework for C - Version 2.1-3 00:15:39.764 http://cunit.sourceforge.net/ 00:15:39.765 00:15:39.765 00:15:39.765 Suite: bdevio tests on: Nvme1n1 00:15:39.765 Test: blockdev write read block ...passed 00:15:39.765 Test: blockdev write zeroes read block ...passed 00:15:39.765 Test: blockdev write zeroes read no split ...passed 00:15:40.023 Test: blockdev write zeroes read split ...passed 00:15:40.023 Test: blockdev write zeroes read split partial ...passed 00:15:40.023 Test: blockdev reset ...[2024-05-15 17:06:27.481941] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:40.023 [2024-05-15 17:06:27.482004] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4d7f0 (9): Bad file descriptor 00:15:40.023 [2024-05-15 17:06:27.591599] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:40.023 passed 00:15:40.023 Test: blockdev write read 8 blocks ...passed 00:15:40.023 Test: blockdev write read size > 128k ...passed 00:15:40.023 Test: blockdev write read invalid size ...passed 00:15:40.023 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:40.023 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:40.023 Test: blockdev write read max offset ...passed 00:15:40.282 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:40.282 Test: blockdev writev readv 8 blocks ...passed 00:15:40.282 Test: blockdev writev readv 30 x 1block ...passed 00:15:40.282 Test: blockdev writev readv block ...passed 00:15:40.282 Test: blockdev writev readv size > 128k ...passed 00:15:40.282 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:40.282 Test: blockdev comparev and writev ...[2024-05-15 17:06:27.846383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:40.282 [2024-05-15 17:06:27.846413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:40.282 [2024-05-15 17:06:27.846427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:40.282 [2024-05-15 17:06:27.846435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:40.282 [2024-05-15 17:06:27.846689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:40.282 [2024-05-15 17:06:27.846700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:40.282 [2024-05-15 17:06:27.846712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:40.282 [2024-05-15 17:06:27.846720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:40.282 [2024-05-15 17:06:27.846988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:40.282 [2024-05-15 17:06:27.846999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:40.282 [2024-05-15 17:06:27.847012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:40.282 [2024-05-15 17:06:27.847020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:40.282 [2024-05-15 17:06:27.847281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:40.282 [2024-05-15 17:06:27.847291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:40.282 [2024-05-15 17:06:27.847303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:40.282 [2024-05-15 17:06:27.847316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:40.282 passed 00:15:40.282 Test: blockdev nvme passthru rw ...passed 00:15:40.282 Test: blockdev nvme passthru vendor specific ...[2024-05-15 17:06:27.929565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:40.282 [2024-05-15 17:06:27.929582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:40.282 [2024-05-15 17:06:27.929715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:40.282 [2024-05-15 17:06:27.929726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:40.282 [2024-05-15 17:06:27.929857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:40.282 [2024-05-15 17:06:27.929867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:40.282 [2024-05-15 17:06:27.929993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:40.282 [2024-05-15 17:06:27.930003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:40.282 passed 00:15:40.541 Test: blockdev nvme admin passthru ...passed 00:15:40.541 Test: blockdev copy ...passed 00:15:40.541 00:15:40.541 Run Summary: Type Total Ran Passed Failed Inactive 00:15:40.541 suites 1 1 n/a 0 0 00:15:40.541 tests 23 23 23 0 0 00:15:40.541 asserts 152 152 152 0 n/a 00:15:40.541 00:15:40.541 Elapsed time = 1.400 seconds 00:15:40.541 17:06:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:40.541 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.541 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:40.541 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.541 17:06:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:40.541 17:06:28 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:40.541 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:40.541 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:40.541 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:40.541 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:40.541 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:40.541 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:40.541 rmmod nvme_tcp 00:15:40.799 rmmod nvme_fabrics 00:15:40.799 rmmod nvme_keyring 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3057040 ']' 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3057040 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
3057040 ']' 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3057040 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3057040 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3057040' 00:15:40.799 killing process with pid 3057040 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3057040 00:15:40.799 [2024-05-15 17:06:28.287608] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:40.799 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3057040 00:15:41.058 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.058 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.058 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.058 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.058 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.058 17:06:28 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.058 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.058 17:06:28 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.962 17:06:30 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:42.962 00:15:42.962 real 0m10.024s 00:15:42.962 user 0m13.180s 00:15:42.962 sys 0m4.584s 00:15:42.962 17:06:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:42.962 17:06:30 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:42.962 ************************************ 00:15:42.962 END TEST nvmf_bdevio 00:15:42.962 ************************************ 00:15:42.962 17:06:30 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:42.962 17:06:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:42.962 17:06:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:42.962 17:06:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:43.220 ************************************ 00:15:43.220 START TEST nvmf_auth_target 00:15:43.220 ************************************ 00:15:43.220 17:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:43.220 * Looking for test storage... 
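For reference, a condensed sketch of the teardown that nvmftestfini performed at the end of the bdevio run above (commands as traced; _remove_spdk_ns is the common.sh helper that tears down the test namespace, with its output hidden through the fd-14 redirect seen in the log):

    modprobe -v -r nvme-tcp         # rmmod output above shows nvme_tcp, nvme_fabrics and nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    kill 3057040 && wait 3057040    # killprocess: stop the nvmf_tgt started for this test
    ip -4 addr flush cvl_0_1        # nvmf_tcp_fini: drop the initiator-side address
    _remove_spdk_ns                 # remove the cvl_0_0_ns_spdk namespace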
00:15:43.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:43.220 17:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.220 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:43.220 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:43.221 17:06:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:48.516 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:48.516 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:48.516 Found net devices under 
0000:86:00.0: cvl_0_0 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:48.516 Found net devices under 0000:86:00.1: cvl_0_1 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:48.516 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:48.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:15:48.517 00:15:48.517 --- 10.0.0.2 ping statistics --- 00:15:48.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.517 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:48.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:15:48.517 00:15:48.517 --- 10.0.0.1 ping statistics --- 00:15:48.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.517 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3060816 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3060816 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3060816 ']' 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
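For reference, a condensed sketch of the nvmf_tcp_init sequence traced above (here and in the earlier bdevio run): the first E810 port, cvl_0_0, is moved into a private network namespace as the target side, while the second port, cvl_0_1, stays in the root namespace as the initiator side. All commands are the ones visible in this log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator-side address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target-side address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP toward the initiator port
    ping -c 1 10.0.0.2                                                 # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator sanity check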
00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:48.517 17:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=3060888 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4166a93ec3f82ee4e75548ce0add7b3733267af4f83f3d9d 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fsB 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4166a93ec3f82ee4e75548ce0add7b3733267af4f83f3d9d 0 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4166a93ec3f82ee4e75548ce0add7b3733267af4f83f3d9d 0 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4166a93ec3f82ee4e75548ce0add7b3733267af4f83f3d9d 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fsB 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fsB 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # 
keys[0]=/tmp/spdk.key-null.fsB 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=bfd7a2baf63744b14334a06ad8f21531 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.LnA 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key bfd7a2baf63744b14334a06ad8f21531 1 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 bfd7a2baf63744b14334a06ad8f21531 1 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=bfd7a2baf63744b14334a06ad8f21531 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:49.453 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.LnA 00:15:49.454 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.LnA 00:15:49.454 17:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.LnA 00:15:49.454 17:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:15:49.454 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.454 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.454 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:49.454 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:49.454 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:49.454 17:06:36 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1841bcef6f0868dbe7c3f95c48314531af7ac87598275413 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.hGd 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1841bcef6f0868dbe7c3f95c48314531af7ac87598275413 2 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1841bcef6f0868dbe7c3f95c48314531af7ac87598275413 2 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 
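For reference, a condensed sketch of the DHCHAP secret generation traced here: gen_dhchap_key (from test/nvmf/common.sh) draws len/2 random bytes with xxd, wraps them in a DHHC-1 secret for the requested digest through a small inline python helper whose details are not shown in this log, stores the result in a 0600 temp file, and prints the path. auth.sh captures one path per digest; the sha384 and sha512 keys are produced the same way just below:

    # digest index used by the helper: null=0, sha256=1, sha384=2, sha512=3
    keys[0]=$(gen_dhchap_key null 48)      # -> /tmp/spdk.key-null.fsB in this run
    keys[1]=$(gen_dhchap_key sha256 32)    # -> /tmp/spdk.key-sha256.LnA
    keys[2]=$(gen_dhchap_key sha384 48)    # generated below
    keys[3]=$(gen_dhchap_key sha512 64)    # generated below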
00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1841bcef6f0868dbe7c3f95c48314531af7ac87598275413 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.hGd 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.hGd 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.hGd 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d7cd8544e37dbbdbe5a850872d04cfa1fc962eea3f42bb019b2f3037b682a1ae 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.z8S 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d7cd8544e37dbbdbe5a850872d04cfa1fc962eea3f42bb019b2f3037b682a1ae 3 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d7cd8544e37dbbdbe5a850872d04cfa1fc962eea3f42bb019b2f3037b682a1ae 3 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d7cd8544e37dbbdbe5a850872d04cfa1fc962eea3f42bb019b2f3037b682a1ae 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:49.454 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.z8S 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.z8S 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.z8S 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 3060816 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3060816 ']' 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:49.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 3060888 /var/tmp/host.sock 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3060888 ']' 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:49.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:49.712 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.970 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:49.970 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:15:49.970 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:15:49.970 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.971 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.971 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.971 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:15:49.971 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fsB 00:15:49.971 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.971 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.971 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.971 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.fsB 00:15:49.971 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.fsB 00:15:50.229 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:15:50.229 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.LnA 00:15:50.229 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.229 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.229 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.229 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.LnA 00:15:50.229 17:06:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.LnA 00:15:50.229 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:15:50.229 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.hGd 00:15:50.229 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.229 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.229 17:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.229 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.hGd 00:15:50.229 17:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.hGd 00:15:50.488 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:15:50.488 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.z8S 00:15:50.488 17:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.488 17:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.488 17:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.488 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.z8S 00:15:50.488 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.z8S 00:15:50.746 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:15:50.746 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.746 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:50.746 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:50.746 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:51.004 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:15:51.004 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:51.004 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:51.004 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:51.004 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:51.004 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:15:51.004 17:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.004 17:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.004 17:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:15:51.004 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:51.004 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:51.004 00:15:51.004 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:51.004 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:51.004 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.262 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.262 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.262 17:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.262 17:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.262 17:06:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.262 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:51.262 { 00:15:51.262 "cntlid": 1, 00:15:51.262 "qid": 0, 00:15:51.262 "state": "enabled", 00:15:51.262 "listen_address": { 00:15:51.262 "trtype": "TCP", 00:15:51.262 "adrfam": "IPv4", 00:15:51.262 "traddr": "10.0.0.2", 00:15:51.262 "trsvcid": "4420" 00:15:51.262 }, 00:15:51.262 "peer_address": { 00:15:51.263 "trtype": "TCP", 00:15:51.263 "adrfam": "IPv4", 00:15:51.263 "traddr": "10.0.0.1", 00:15:51.263 "trsvcid": "37992" 00:15:51.263 }, 00:15:51.263 "auth": { 00:15:51.263 "state": "completed", 00:15:51.263 "digest": "sha256", 00:15:51.263 "dhgroup": "null" 00:15:51.263 } 00:15:51.263 } 00:15:51.263 ]' 00:15:51.263 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:51.263 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.263 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:51.263 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:51.263 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:51.521 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.521 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.521 17:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.521 17:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 
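The nvme connect above hands the initiator the same key0 material as a literal DHHC-1 secret string. Purely as an illustration (this is not part of the test), the secret can be unpacked to confirm it embeds the generated hex key: dropping the assumed four-byte CRC trailer with GNU head's negative byte count recovers the 48-character hex string produced for key0 earlier in the run.

  # Illustration only: unpack the DHHC-1 secret used above and recover the raw hex key.
  secret='DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==:'
  payload=${secret#DHHC-1:??:}        # strip the prefix and digest id
  payload=${payload%:}                # strip the trailing colon
  printf '%s' "$payload" | base64 -d | head -c -4; echo    # drops the assumed 4-byte CRC trailer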
00:15:52.086 17:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.086 17:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:52.086 17:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.086 17:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.086 17:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.086 17:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:52.086 17:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:52.086 17:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:52.344 17:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:15:52.344 17:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:52.344 17:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:52.344 17:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:52.344 17:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:52.344 17:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:15:52.344 17:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.344 17:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.344 17:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.344 17:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:52.344 17:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:52.603 00:15:52.603 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:52.603 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:52.603 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.861 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.861 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.861 17:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.861 17:06:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.861 17:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.861 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:52.861 { 00:15:52.861 "cntlid": 3, 00:15:52.861 "qid": 0, 00:15:52.861 "state": "enabled", 00:15:52.861 "listen_address": { 00:15:52.861 "trtype": "TCP", 00:15:52.861 "adrfam": "IPv4", 00:15:52.861 "traddr": "10.0.0.2", 00:15:52.861 "trsvcid": "4420" 00:15:52.861 }, 00:15:52.861 "peer_address": { 00:15:52.861 "trtype": "TCP", 00:15:52.861 "adrfam": "IPv4", 00:15:52.861 "traddr": "10.0.0.1", 00:15:52.861 "trsvcid": "38024" 00:15:52.861 }, 00:15:52.861 "auth": { 00:15:52.861 "state": "completed", 00:15:52.861 "digest": "sha256", 00:15:52.861 "dhgroup": "null" 00:15:52.861 } 00:15:52.861 } 00:15:52.861 ]' 00:15:52.861 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:52.861 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.861 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:52.861 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:52.861 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:52.861 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.861 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.861 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.120 17:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:15:53.686 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.686 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.686 17:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.686 17:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.686 17:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.687 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:53.687 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:53.687 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:53.945 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:15:53.945 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:53.945 17:06:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:53.945 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:53.945 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:53.945 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:15:53.945 17:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.945 17:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.945 17:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.945 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:53.945 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:53.945 00:15:54.204 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:54.204 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:54.204 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.204 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.204 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.204 17:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.204 17:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.204 17:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.204 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:54.204 { 00:15:54.204 "cntlid": 5, 00:15:54.204 "qid": 0, 00:15:54.204 "state": "enabled", 00:15:54.204 "listen_address": { 00:15:54.204 "trtype": "TCP", 00:15:54.205 "adrfam": "IPv4", 00:15:54.205 "traddr": "10.0.0.2", 00:15:54.205 "trsvcid": "4420" 00:15:54.205 }, 00:15:54.205 "peer_address": { 00:15:54.205 "trtype": "TCP", 00:15:54.205 "adrfam": "IPv4", 00:15:54.205 "traddr": "10.0.0.1", 00:15:54.205 "trsvcid": "38062" 00:15:54.205 }, 00:15:54.205 "auth": { 00:15:54.205 "state": "completed", 00:15:54.205 "digest": "sha256", 00:15:54.205 "dhgroup": "null" 00:15:54.205 } 00:15:54.205 } 00:15:54.205 ]' 00:15:54.205 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:54.205 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.205 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:54.463 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:54.463 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:54.463 17:06:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.463 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.463 17:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.463 17:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:15:55.030 17:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.030 17:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.030 17:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.031 17:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.031 17:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.031 17:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:55.031 17:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:55.031 17:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:55.290 17:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:15:55.290 17:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:55.290 17:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:55.290 17:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:55.290 17:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:55.290 17:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:55.290 17:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.290 17:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.290 17:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.290 17:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.290 17:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.548 00:15:55.548 17:06:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:55.548 17:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:55.548 17:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.807 17:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.807 17:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.807 17:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.807 17:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.807 17:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.807 17:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:55.807 { 00:15:55.807 "cntlid": 7, 00:15:55.807 "qid": 0, 00:15:55.807 "state": "enabled", 00:15:55.807 "listen_address": { 00:15:55.807 "trtype": "TCP", 00:15:55.807 "adrfam": "IPv4", 00:15:55.807 "traddr": "10.0.0.2", 00:15:55.807 "trsvcid": "4420" 00:15:55.807 }, 00:15:55.807 "peer_address": { 00:15:55.807 "trtype": "TCP", 00:15:55.807 "adrfam": "IPv4", 00:15:55.807 "traddr": "10.0.0.1", 00:15:55.807 "trsvcid": "38076" 00:15:55.807 }, 00:15:55.807 "auth": { 00:15:55.807 "state": "completed", 00:15:55.807 "digest": "sha256", 00:15:55.807 "dhgroup": "null" 00:15:55.807 } 00:15:55.807 } 00:15:55.807 ]' 00:15:55.807 17:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:55.807 17:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.807 17:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:55.807 17:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:15:55.807 17:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:55.807 17:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.807 17:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.807 17:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.066 17:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:15:56.634 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.634 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.634 17:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.634 17:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.634 17:06:44 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.634 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.634 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:56.634 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:56.634 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:56.894 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:15:56.894 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:56.894 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:56.894 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:56.894 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:56.894 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:15:56.894 17:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.894 17:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.894 17:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.894 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:56.894 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:15:57.154 00:15:57.154 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:57.154 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:57.154 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.154 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.154 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.154 17:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.154 17:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.154 17:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.154 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:57.154 { 00:15:57.154 "cntlid": 9, 00:15:57.154 "qid": 0, 00:15:57.154 "state": "enabled", 00:15:57.154 "listen_address": { 00:15:57.154 "trtype": "TCP", 00:15:57.154 "adrfam": "IPv4", 00:15:57.154 "traddr": "10.0.0.2", 00:15:57.154 "trsvcid": "4420" 00:15:57.154 }, 
00:15:57.154 "peer_address": { 00:15:57.154 "trtype": "TCP", 00:15:57.154 "adrfam": "IPv4", 00:15:57.154 "traddr": "10.0.0.1", 00:15:57.154 "trsvcid": "52834" 00:15:57.154 }, 00:15:57.154 "auth": { 00:15:57.154 "state": "completed", 00:15:57.154 "digest": "sha256", 00:15:57.154 "dhgroup": "ffdhe2048" 00:15:57.154 } 00:15:57.154 } 00:15:57.154 ]' 00:15:57.154 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:57.413 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.413 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:57.413 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.413 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:57.413 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.413 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.413 17:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.672 17:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:15:58.240 17:06:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:58.240 17:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:15:58.499 00:15:58.499 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:15:58.499 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:15:58.499 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.758 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.758 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.758 17:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.758 17:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.758 17:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.758 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:15:58.758 { 00:15:58.758 "cntlid": 11, 00:15:58.758 "qid": 0, 00:15:58.758 "state": "enabled", 00:15:58.758 "listen_address": { 00:15:58.758 "trtype": "TCP", 00:15:58.758 "adrfam": "IPv4", 00:15:58.758 "traddr": "10.0.0.2", 00:15:58.758 "trsvcid": "4420" 00:15:58.758 }, 00:15:58.758 "peer_address": { 00:15:58.758 "trtype": "TCP", 00:15:58.758 "adrfam": "IPv4", 00:15:58.758 "traddr": "10.0.0.1", 00:15:58.758 "trsvcid": "52858" 00:15:58.758 }, 00:15:58.758 "auth": { 00:15:58.758 "state": "completed", 00:15:58.758 "digest": "sha256", 00:15:58.758 "dhgroup": "ffdhe2048" 00:15:58.758 } 00:15:58.758 } 00:15:58.758 ]' 00:15:58.758 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:15:58.758 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.758 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:15:58.758 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:58.758 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:15:58.758 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.758 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.758 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.017 17:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:15:59.582 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.583 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.583 17:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.583 17:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.583 17:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.583 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:15:59.583 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:59.583 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:59.841 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:15:59.841 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:15:59.841 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:59.841 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:59.841 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:59.841 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:15:59.841 17:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.841 17:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.841 17:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.841 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:15:59.841 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:00.099 00:16:00.099 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:00.099 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:00.099 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.099 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 
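After every attach the script re-reads the controller list over the host-side RPC socket and the subsystem's qpairs over the target-side socket, then asserts the negotiated digest, DH group and auth state with jq. Condensed, and with the rpc.py path and NQN copied from this trace, the check amounts to:

  # Condensed form of the per-iteration verification seen throughout this log.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  [[ $("$rpc_py" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$("$rpc_py" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)   # target-side socket
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]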
00:16:00.099 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.099 17:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.099 17:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.099 17:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.099 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:00.099 { 00:16:00.099 "cntlid": 13, 00:16:00.099 "qid": 0, 00:16:00.099 "state": "enabled", 00:16:00.099 "listen_address": { 00:16:00.099 "trtype": "TCP", 00:16:00.099 "adrfam": "IPv4", 00:16:00.099 "traddr": "10.0.0.2", 00:16:00.099 "trsvcid": "4420" 00:16:00.099 }, 00:16:00.099 "peer_address": { 00:16:00.099 "trtype": "TCP", 00:16:00.099 "adrfam": "IPv4", 00:16:00.099 "traddr": "10.0.0.1", 00:16:00.099 "trsvcid": "52888" 00:16:00.099 }, 00:16:00.099 "auth": { 00:16:00.099 "state": "completed", 00:16:00.099 "digest": "sha256", 00:16:00.099 "dhgroup": "ffdhe2048" 00:16:00.099 } 00:16:00.099 } 00:16:00.099 ]' 00:16:00.099 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:00.357 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.357 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:00.357 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.357 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:00.357 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.357 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.357 17:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.615 17:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.182 17:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.441 00:16:01.441 17:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:01.441 17:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.441 17:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:01.699 17:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.699 17:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.699 17:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.699 17:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.699 17:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.699 17:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:01.699 { 00:16:01.699 "cntlid": 15, 00:16:01.699 "qid": 0, 00:16:01.699 "state": "enabled", 00:16:01.699 "listen_address": { 00:16:01.699 "trtype": "TCP", 00:16:01.699 "adrfam": "IPv4", 00:16:01.699 "traddr": "10.0.0.2", 00:16:01.700 "trsvcid": "4420" 00:16:01.700 }, 00:16:01.700 "peer_address": { 00:16:01.700 "trtype": "TCP", 00:16:01.700 "adrfam": "IPv4", 00:16:01.700 "traddr": "10.0.0.1", 00:16:01.700 "trsvcid": "52908" 00:16:01.700 }, 00:16:01.700 "auth": { 00:16:01.700 "state": "completed", 00:16:01.700 "digest": "sha256", 00:16:01.700 "dhgroup": "ffdhe2048" 00:16:01.700 } 00:16:01.700 } 00:16:01.700 ]' 00:16:01.700 17:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:01.700 17:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.700 17:06:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:01.700 17:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:01.700 17:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:01.700 17:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.700 17:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.700 17:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.958 17:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:16:02.525 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.525 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.525 17:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.525 17:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.525 17:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.525 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.525 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:02.525 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:02.525 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:02.783 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:16:02.783 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:02.783 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:02.783 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:02.783 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:02.783 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:02.783 17:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.783 17:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.783 17:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.783 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:02.783 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:03.041 00:16:03.041 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:03.041 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:03.041 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.041 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.041 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.041 17:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.041 17:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.299 17:06:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.299 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:03.299 { 00:16:03.299 "cntlid": 17, 00:16:03.299 "qid": 0, 00:16:03.299 "state": "enabled", 00:16:03.299 "listen_address": { 00:16:03.299 "trtype": "TCP", 00:16:03.299 "adrfam": "IPv4", 00:16:03.299 "traddr": "10.0.0.2", 00:16:03.299 "trsvcid": "4420" 00:16:03.299 }, 00:16:03.299 "peer_address": { 00:16:03.299 "trtype": "TCP", 00:16:03.299 "adrfam": "IPv4", 00:16:03.299 "traddr": "10.0.0.1", 00:16:03.299 "trsvcid": "52926" 00:16:03.299 }, 00:16:03.299 "auth": { 00:16:03.299 "state": "completed", 00:16:03.299 "digest": "sha256", 00:16:03.299 "dhgroup": "ffdhe3072" 00:16:03.299 } 00:16:03.299 } 00:16:03.299 ]' 00:16:03.299 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:03.299 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.299 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:03.299 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:03.299 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:03.299 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.299 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.299 17:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.557 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.123 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:04.123 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:04.412 00:16:04.412 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:04.412 17:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:04.412 17:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.678 17:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.678 17:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.678 17:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.678 17:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.678 17:06:52 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.678 17:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:04.678 { 00:16:04.678 "cntlid": 19, 00:16:04.678 "qid": 0, 00:16:04.678 "state": "enabled", 00:16:04.678 "listen_address": { 00:16:04.678 "trtype": "TCP", 00:16:04.678 "adrfam": "IPv4", 00:16:04.678 "traddr": "10.0.0.2", 00:16:04.678 "trsvcid": "4420" 00:16:04.678 }, 00:16:04.678 "peer_address": { 00:16:04.678 "trtype": "TCP", 00:16:04.678 "adrfam": "IPv4", 00:16:04.678 "traddr": "10.0.0.1", 00:16:04.678 "trsvcid": "52944" 00:16:04.678 }, 00:16:04.678 "auth": { 00:16:04.678 "state": "completed", 00:16:04.678 "digest": "sha256", 00:16:04.678 "dhgroup": "ffdhe3072" 00:16:04.678 } 00:16:04.678 } 00:16:04.678 ]' 00:16:04.678 17:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:04.678 17:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.678 17:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:04.678 17:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:04.678 17:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:04.678 17:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.678 17:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.678 17:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.936 17:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:16:05.502 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.502 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.502 17:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.502 17:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.502 17:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.502 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:05.502 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:05.502 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:05.761 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:16:05.761 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:05.761 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:05.761 17:06:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:05.761 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:05.761 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:05.761 17:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.761 17:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.761 17:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.761 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:05.761 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:06.019 00:16:06.019 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:06.019 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:06.019 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.277 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.277 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.277 17:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.277 17:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.277 17:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.277 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:06.277 { 00:16:06.277 "cntlid": 21, 00:16:06.277 "qid": 0, 00:16:06.277 "state": "enabled", 00:16:06.277 "listen_address": { 00:16:06.277 "trtype": "TCP", 00:16:06.277 "adrfam": "IPv4", 00:16:06.277 "traddr": "10.0.0.2", 00:16:06.277 "trsvcid": "4420" 00:16:06.277 }, 00:16:06.277 "peer_address": { 00:16:06.277 "trtype": "TCP", 00:16:06.277 "adrfam": "IPv4", 00:16:06.277 "traddr": "10.0.0.1", 00:16:06.277 "trsvcid": "48470" 00:16:06.277 }, 00:16:06.277 "auth": { 00:16:06.277 "state": "completed", 00:16:06.277 "digest": "sha256", 00:16:06.277 "dhgroup": "ffdhe3072" 00:16:06.277 } 00:16:06.277 } 00:16:06.277 ]' 00:16:06.277 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:06.277 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.277 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:06.277 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:06.277 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:06.277 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
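The block above is the verification half of the test's connect_authenticate step: after a controller is attached with a DH-HMAC-CHAP key, the host-side RPC lists the attached controllers and the target-side RPC dumps the subsystem's queue pairs so the negotiated digest, DH group, and "completed" authentication state can be checked. A minimal sketch of that check, assuming the rpc.py path and /var/tmp/host.sock socket shown in the log (the helper names are illustrative, not taken from the script itself):

    #!/usr/bin/env bash
    # Illustrative sketch only; paths, socket, and helper names are assumptions drawn from the log.
    set -euo pipefail   # a failed [[ ]] check aborts the run, mirroring the test's behavior
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    hostrpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }  # host-side RPC server
    rpc_cmd() { "$SPDK/scripts/rpc.py" "$@"; }                        # target-side RPC (default socket)

    verify_auth() {
        local expected_digest=$1 expected_dhgroup=$2
        # The attached bdev controller should be visible as nvme0 on the host side.
        [[ "$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
        # Query the target for the subsystem's queue pairs and inspect the auth block.
        local qpairs
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "$expected_digest"  ]]
        [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "$expected_dhgroup" ]]
        [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == "completed"         ]]
    }
    # e.g.: verify_auth sha256 ffdhe3072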
00:16:06.277 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.277 17:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.535 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:16:07.102 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.102 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.102 17:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.102 17:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.102 17:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.103 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:07.103 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:07.103 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:07.103 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:16:07.103 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:07.103 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.103 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:07.103 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:07.103 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:07.103 17:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.103 17:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.103 17:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.103 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:07.103 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:07.361 00:16:07.361 17:06:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:07.361 17:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:07.361 17:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.620 17:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.620 17:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.620 17:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.620 17:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.620 17:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.620 17:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:07.620 { 00:16:07.620 "cntlid": 23, 00:16:07.620 "qid": 0, 00:16:07.620 "state": "enabled", 00:16:07.620 "listen_address": { 00:16:07.620 "trtype": "TCP", 00:16:07.620 "adrfam": "IPv4", 00:16:07.620 "traddr": "10.0.0.2", 00:16:07.620 "trsvcid": "4420" 00:16:07.620 }, 00:16:07.620 "peer_address": { 00:16:07.620 "trtype": "TCP", 00:16:07.620 "adrfam": "IPv4", 00:16:07.620 "traddr": "10.0.0.1", 00:16:07.620 "trsvcid": "48506" 00:16:07.620 }, 00:16:07.620 "auth": { 00:16:07.620 "state": "completed", 00:16:07.620 "digest": "sha256", 00:16:07.620 "dhgroup": "ffdhe3072" 00:16:07.620 } 00:16:07.620 } 00:16:07.620 ]' 00:16:07.620 17:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:07.620 17:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.620 17:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:07.877 17:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.877 17:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:07.877 17:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.877 17:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.877 17:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.877 17:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:16:08.440 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.440 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.440 17:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.440 17:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.440 17:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:16:08.440 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.440 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:08.440 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:08.440 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:08.697 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:16:08.697 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:08.697 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:08.697 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:08.697 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:08.697 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:08.697 17:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.697 17:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.697 17:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.697 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:08.697 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:08.955 00:16:08.955 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:08.955 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:08.955 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.213 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.213 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.213 17:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.213 17:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.213 17:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.213 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:09.213 { 00:16:09.213 "cntlid": 25, 00:16:09.213 "qid": 0, 00:16:09.213 "state": "enabled", 00:16:09.213 "listen_address": { 00:16:09.213 "trtype": "TCP", 00:16:09.213 "adrfam": "IPv4", 00:16:09.213 "traddr": "10.0.0.2", 00:16:09.213 "trsvcid": "4420" 00:16:09.213 }, 00:16:09.213 "peer_address": { 
00:16:09.213 "trtype": "TCP", 00:16:09.213 "adrfam": "IPv4", 00:16:09.213 "traddr": "10.0.0.1", 00:16:09.213 "trsvcid": "48526" 00:16:09.213 }, 00:16:09.213 "auth": { 00:16:09.213 "state": "completed", 00:16:09.213 "digest": "sha256", 00:16:09.213 "dhgroup": "ffdhe4096" 00:16:09.213 } 00:16:09.213 } 00:16:09.213 ]' 00:16:09.213 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:09.214 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.214 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:09.214 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:09.214 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:09.214 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.214 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.214 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.472 17:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:16:10.037 17:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.037 17:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.037 17:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.037 17:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.037 17:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.037 17:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:10.037 17:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.037 17:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:10.295 17:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:16:10.295 17:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:10.295 17:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:10.295 17:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:10.295 17:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:10.295 17:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:10.295 17:06:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.295 17:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.295 17:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.295 17:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:10.295 17:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:10.553 00:16:10.553 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:10.553 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:10.553 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.553 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.553 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.553 17:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.553 17:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.553 17:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.553 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:10.553 { 00:16:10.553 "cntlid": 27, 00:16:10.553 "qid": 0, 00:16:10.553 "state": "enabled", 00:16:10.553 "listen_address": { 00:16:10.553 "trtype": "TCP", 00:16:10.553 "adrfam": "IPv4", 00:16:10.553 "traddr": "10.0.0.2", 00:16:10.553 "trsvcid": "4420" 00:16:10.553 }, 00:16:10.553 "peer_address": { 00:16:10.553 "trtype": "TCP", 00:16:10.553 "adrfam": "IPv4", 00:16:10.553 "traddr": "10.0.0.1", 00:16:10.553 "trsvcid": "48558" 00:16:10.553 }, 00:16:10.553 "auth": { 00:16:10.553 "state": "completed", 00:16:10.553 "digest": "sha256", 00:16:10.553 "dhgroup": "ffdhe4096" 00:16:10.553 } 00:16:10.553 } 00:16:10.553 ]' 00:16:10.553 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:10.819 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.819 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:10.819 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:10.819 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:10.819 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.819 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.819 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.085 17:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:16:11.650 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.650 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:11.651 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:11.909 00:16:11.909 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:11.909 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:11.909 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.167 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.167 17:06:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.167 17:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.167 17:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.167 17:06:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.167 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:12.167 { 00:16:12.167 "cntlid": 29, 00:16:12.167 "qid": 0, 00:16:12.167 "state": "enabled", 00:16:12.167 "listen_address": { 00:16:12.167 "trtype": "TCP", 00:16:12.167 "adrfam": "IPv4", 00:16:12.167 "traddr": "10.0.0.2", 00:16:12.167 "trsvcid": "4420" 00:16:12.167 }, 00:16:12.167 "peer_address": { 00:16:12.167 "trtype": "TCP", 00:16:12.167 "adrfam": "IPv4", 00:16:12.167 "traddr": "10.0.0.1", 00:16:12.167 "trsvcid": "48594" 00:16:12.167 }, 00:16:12.167 "auth": { 00:16:12.167 "state": "completed", 00:16:12.167 "digest": "sha256", 00:16:12.167 "dhgroup": "ffdhe4096" 00:16:12.167 } 00:16:12.167 } 00:16:12.167 ]' 00:16:12.167 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:12.167 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.167 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:12.167 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:12.425 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:12.425 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.425 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.425 17:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.425 17:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:16:12.990 17:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.990 17:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.990 17:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.990 17:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.990 17:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.990 17:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:12.991 17:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:12.991 17:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe4096 00:16:13.249 17:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:16:13.249 17:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:13.249 17:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:13.249 17:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:13.249 17:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:13.249 17:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:13.249 17:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.249 17:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.249 17:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.249 17:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:13.249 17:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:13.508 00:16:13.508 17:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:13.508 17:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:13.508 17:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.766 17:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.766 17:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.766 17:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.766 17:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.766 17:07:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.766 17:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:13.766 { 00:16:13.766 "cntlid": 31, 00:16:13.766 "qid": 0, 00:16:13.766 "state": "enabled", 00:16:13.766 "listen_address": { 00:16:13.766 "trtype": "TCP", 00:16:13.766 "adrfam": "IPv4", 00:16:13.766 "traddr": "10.0.0.2", 00:16:13.766 "trsvcid": "4420" 00:16:13.766 }, 00:16:13.766 "peer_address": { 00:16:13.766 "trtype": "TCP", 00:16:13.766 "adrfam": "IPv4", 00:16:13.766 "traddr": "10.0.0.1", 00:16:13.766 "trsvcid": "48634" 00:16:13.766 }, 00:16:13.766 "auth": { 00:16:13.766 "state": "completed", 00:16:13.766 "digest": "sha256", 00:16:13.766 "dhgroup": "ffdhe4096" 00:16:13.766 } 00:16:13.766 } 00:16:13.766 ]' 00:16:13.766 17:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:13.766 17:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.766 17:07:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:13.766 17:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:13.766 17:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:13.766 17:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.766 17:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.767 17:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.025 17:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:16:14.591 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.591 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.591 17:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.591 17:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.591 17:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.591 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:14.591 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:14.591 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.591 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.849 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:16:14.849 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:14.849 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:14.849 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:14.849 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:14.849 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:14.849 17:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.849 17:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.849 17:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.849 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:14.849 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:15.210 00:16:15.210 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:15.210 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:15.210 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.210 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.210 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.210 17:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.210 17:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.469 17:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.469 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:15.469 { 00:16:15.469 "cntlid": 33, 00:16:15.469 "qid": 0, 00:16:15.469 "state": "enabled", 00:16:15.469 "listen_address": { 00:16:15.469 "trtype": "TCP", 00:16:15.469 "adrfam": "IPv4", 00:16:15.469 "traddr": "10.0.0.2", 00:16:15.469 "trsvcid": "4420" 00:16:15.469 }, 00:16:15.469 "peer_address": { 00:16:15.469 "trtype": "TCP", 00:16:15.469 "adrfam": "IPv4", 00:16:15.469 "traddr": "10.0.0.1", 00:16:15.469 "trsvcid": "48656" 00:16:15.469 }, 00:16:15.469 "auth": { 00:16:15.469 "state": "completed", 00:16:15.469 "digest": "sha256", 00:16:15.469 "dhgroup": "ffdhe6144" 00:16:15.469 } 00:16:15.469 } 00:16:15.469 ]' 00:16:15.469 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:15.469 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.469 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:15.469 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:15.469 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:15.469 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.469 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.469 17:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.727 17:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.294 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:16.294 17:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:16.862 00:16:16.862 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:16.862 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:16.862 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.862 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.862 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.862 17:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.862 17:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.862 17:07:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.862 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:16.862 { 00:16:16.862 "cntlid": 35, 00:16:16.862 "qid": 0, 00:16:16.862 "state": "enabled", 00:16:16.862 "listen_address": { 00:16:16.862 "trtype": "TCP", 00:16:16.862 "adrfam": "IPv4", 00:16:16.862 "traddr": "10.0.0.2", 00:16:16.862 "trsvcid": "4420" 00:16:16.862 }, 00:16:16.862 "peer_address": { 00:16:16.862 "trtype": "TCP", 00:16:16.862 "adrfam": "IPv4", 00:16:16.862 "traddr": "10.0.0.1", 00:16:16.862 "trsvcid": "35192" 00:16:16.862 }, 00:16:16.862 "auth": { 00:16:16.862 "state": "completed", 00:16:16.862 "digest": "sha256", 00:16:16.862 "dhgroup": "ffdhe6144" 00:16:16.862 } 00:16:16.862 } 00:16:16.862 ]' 00:16:16.862 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:17.120 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.120 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:17.120 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:17.120 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:17.120 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.120 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.120 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.378 17:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:17.944 17:07:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:17.944 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:18.510 00:16:18.510 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:18.510 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.510 17:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:18.510 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.510 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.510 17:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.510 17:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.510 17:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.510 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:18.510 { 00:16:18.510 "cntlid": 37, 00:16:18.510 "qid": 0, 00:16:18.510 "state": "enabled", 00:16:18.510 "listen_address": { 00:16:18.510 "trtype": "TCP", 00:16:18.510 "adrfam": "IPv4", 00:16:18.510 "traddr": "10.0.0.2", 00:16:18.510 "trsvcid": "4420" 00:16:18.510 }, 00:16:18.510 "peer_address": { 00:16:18.510 "trtype": "TCP", 00:16:18.510 "adrfam": "IPv4", 00:16:18.510 "traddr": "10.0.0.1", 00:16:18.510 "trsvcid": "35216" 00:16:18.510 }, 00:16:18.510 "auth": { 00:16:18.510 "state": "completed", 00:16:18.510 "digest": "sha256", 00:16:18.510 "dhgroup": "ffdhe6144" 00:16:18.510 } 00:16:18.510 } 00:16:18.510 ]' 00:16:18.510 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:18.510 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.510 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:18.768 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:18.768 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:18.768 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
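Every block in this log is one pass of the same loop: for each DH group (ffdhe3072 through ffdhe8192, all with the sha256 digest) and each configured key, the test resets the host's DH-HMAC-CHAP options, authorizes the host NQN on the subsystem with that key, attaches and verifies a controller over the host RPC, then repeats the handshake with the kernel initiator via nvme connect before cleaning up. A rough outline of one iteration, under the same assumptions as the sketch above (the DHHC-1 secret strings are the ones visible in the log; helper names remain illustrative):

    # Illustrative outline of a single (dhgroup, key) iteration; not the script's own code.
    run_iteration() {
        local dhgroup=$1 keyid=$2 secret=$3
        local subnqn=nqn.2024-03.io.spdk:cnode0
        local hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

        # Restrict the host-side bdev layer to the digest/DH group under test.
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"

        # Authorize the host on the target with this key, then attach, verify, detach.
        rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"
        verify_auth sha256 "$dhgroup"
        hostrpc bdev_nvme_detach_controller nvme0

        # Repeat the handshake with the kernel initiator, then remove the host again.
        nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
            --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret "$secret"
        nvme disconnect -n "$subnqn"
        rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    }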
00:16:18.768 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.768 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.768 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:16:19.333 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.333 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.333 17:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.333 17:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.591 17:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.591 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:19.591 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:19.591 17:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:19.591 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:16:19.591 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:19.591 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:19.591 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:19.591 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:19.591 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:19.591 17:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.591 17:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.591 17:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.591 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.591 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.849 00:16:20.108 17:07:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:20.108 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:20.108 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.108 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.108 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.108 17:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.108 17:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.108 17:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.108 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:20.108 { 00:16:20.108 "cntlid": 39, 00:16:20.108 "qid": 0, 00:16:20.108 "state": "enabled", 00:16:20.108 "listen_address": { 00:16:20.108 "trtype": "TCP", 00:16:20.108 "adrfam": "IPv4", 00:16:20.108 "traddr": "10.0.0.2", 00:16:20.108 "trsvcid": "4420" 00:16:20.108 }, 00:16:20.108 "peer_address": { 00:16:20.108 "trtype": "TCP", 00:16:20.108 "adrfam": "IPv4", 00:16:20.108 "traddr": "10.0.0.1", 00:16:20.108 "trsvcid": "35260" 00:16:20.108 }, 00:16:20.108 "auth": { 00:16:20.108 "state": "completed", 00:16:20.108 "digest": "sha256", 00:16:20.108 "dhgroup": "ffdhe6144" 00:16:20.108 } 00:16:20.108 } 00:16:20.108 ]' 00:16:20.108 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:20.108 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.108 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:20.367 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:20.367 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:20.367 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.367 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.367 17:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.367 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:16:20.935 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.935 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.935 17:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.935 17:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.935 17:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:16:20.935 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.935 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:20.935 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:20.935 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:21.194 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:16:21.194 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:21.194 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:21.194 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:21.194 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:21.194 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:21.194 17:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.194 17:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.194 17:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.194 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:21.194 17:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:21.762 00:16:21.762 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:21.762 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:21.762 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.762 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.021 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.021 17:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.021 17:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.021 17:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.021 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:22.021 { 00:16:22.021 "cntlid": 41, 00:16:22.021 "qid": 0, 00:16:22.021 "state": "enabled", 00:16:22.021 "listen_address": { 00:16:22.021 "trtype": "TCP", 00:16:22.021 "adrfam": "IPv4", 00:16:22.021 "traddr": "10.0.0.2", 00:16:22.021 "trsvcid": "4420" 00:16:22.021 }, 00:16:22.021 "peer_address": { 
00:16:22.021 "trtype": "TCP", 00:16:22.021 "adrfam": "IPv4", 00:16:22.021 "traddr": "10.0.0.1", 00:16:22.021 "trsvcid": "35282" 00:16:22.021 }, 00:16:22.021 "auth": { 00:16:22.022 "state": "completed", 00:16:22.022 "digest": "sha256", 00:16:22.022 "dhgroup": "ffdhe8192" 00:16:22.022 } 00:16:22.022 } 00:16:22.022 ]' 00:16:22.022 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:22.022 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.022 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:22.022 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:22.022 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:22.022 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.022 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.022 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.280 17:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:22.848 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:23.415 00:16:23.415 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:23.415 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:23.415 17:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.675 17:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.675 17:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.675 17:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.675 17:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.675 17:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.675 17:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:23.675 { 00:16:23.675 "cntlid": 43, 00:16:23.675 "qid": 0, 00:16:23.675 "state": "enabled", 00:16:23.675 "listen_address": { 00:16:23.675 "trtype": "TCP", 00:16:23.675 "adrfam": "IPv4", 00:16:23.675 "traddr": "10.0.0.2", 00:16:23.675 "trsvcid": "4420" 00:16:23.675 }, 00:16:23.675 "peer_address": { 00:16:23.675 "trtype": "TCP", 00:16:23.675 "adrfam": "IPv4", 00:16:23.675 "traddr": "10.0.0.1", 00:16:23.675 "trsvcid": "35308" 00:16:23.675 }, 00:16:23.675 "auth": { 00:16:23.675 "state": "completed", 00:16:23.675 "digest": "sha256", 00:16:23.675 "dhgroup": "ffdhe8192" 00:16:23.675 } 00:16:23.675 } 00:16:23.675 ]' 00:16:23.675 17:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:23.675 17:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.675 17:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:23.675 17:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.675 17:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:23.675 17:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.675 17:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.675 17:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.933 17:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:16:24.499 17:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.499 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.499 17:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.499 17:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.499 17:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.499 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:24.499 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:24.499 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:24.757 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:16:24.757 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:24.757 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:24.757 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:24.757 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:24.757 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:24.757 17:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.757 17:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.757 17:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.757 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:24.757 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:25.016 00:16:25.275 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:25.275 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:25.275 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.275 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.275 17:07:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.275 17:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.275 17:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.275 17:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.275 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:25.275 { 00:16:25.275 "cntlid": 45, 00:16:25.275 "qid": 0, 00:16:25.275 "state": "enabled", 00:16:25.275 "listen_address": { 00:16:25.275 "trtype": "TCP", 00:16:25.275 "adrfam": "IPv4", 00:16:25.275 "traddr": "10.0.0.2", 00:16:25.275 "trsvcid": "4420" 00:16:25.275 }, 00:16:25.275 "peer_address": { 00:16:25.275 "trtype": "TCP", 00:16:25.275 "adrfam": "IPv4", 00:16:25.275 "traddr": "10.0.0.1", 00:16:25.275 "trsvcid": "35342" 00:16:25.275 }, 00:16:25.275 "auth": { 00:16:25.275 "state": "completed", 00:16:25.275 "digest": "sha256", 00:16:25.275 "dhgroup": "ffdhe8192" 00:16:25.275 } 00:16:25.275 } 00:16:25.275 ]' 00:16:25.275 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:25.275 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.275 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:25.533 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:25.533 17:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:25.533 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.533 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.533 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.533 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:16:26.101 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.101 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.101 17:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.101 17:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe8192 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.360 17:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.928 00:16:26.928 17:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:26.928 17:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:26.928 17:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.187 17:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.187 17:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.187 17:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.187 17:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.187 17:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.187 17:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:27.187 { 00:16:27.187 "cntlid": 47, 00:16:27.187 "qid": 0, 00:16:27.187 "state": "enabled", 00:16:27.187 "listen_address": { 00:16:27.187 "trtype": "TCP", 00:16:27.187 "adrfam": "IPv4", 00:16:27.187 "traddr": "10.0.0.2", 00:16:27.187 "trsvcid": "4420" 00:16:27.187 }, 00:16:27.187 "peer_address": { 00:16:27.187 "trtype": "TCP", 00:16:27.187 "adrfam": "IPv4", 00:16:27.187 "traddr": "10.0.0.1", 00:16:27.187 "trsvcid": "49070" 00:16:27.187 }, 00:16:27.187 "auth": { 00:16:27.187 "state": "completed", 00:16:27.187 "digest": "sha256", 00:16:27.187 "dhgroup": "ffdhe8192" 00:16:27.187 } 00:16:27.187 } 00:16:27.187 ]' 00:16:27.187 17:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:27.187 17:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.187 17:07:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:27.187 17:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:27.187 17:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:27.187 17:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.187 17:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.187 17:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.446 17:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:16:28.012 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.012 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.012 17:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.012 17:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.012 17:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.012 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:16:28.012 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.012 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:28.012 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:28.012 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:28.271 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:16:28.271 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:28.271 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:28.271 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:28.271 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:28.271 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:28.271 17:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.271 17:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.271 17:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.271 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:28.271 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:28.271 00:16:28.271 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:28.531 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.531 17:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:28.531 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.531 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.531 17:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.531 17:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.531 17:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.531 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:28.531 { 00:16:28.531 "cntlid": 49, 00:16:28.531 "qid": 0, 00:16:28.531 "state": "enabled", 00:16:28.531 "listen_address": { 00:16:28.531 "trtype": "TCP", 00:16:28.531 "adrfam": "IPv4", 00:16:28.531 "traddr": "10.0.0.2", 00:16:28.531 "trsvcid": "4420" 00:16:28.531 }, 00:16:28.531 "peer_address": { 00:16:28.531 "trtype": "TCP", 00:16:28.531 "adrfam": "IPv4", 00:16:28.531 "traddr": "10.0.0.1", 00:16:28.531 "trsvcid": "49092" 00:16:28.531 }, 00:16:28.531 "auth": { 00:16:28.531 "state": "completed", 00:16:28.531 "digest": "sha384", 00:16:28.531 "dhgroup": "null" 00:16:28.531 } 00:16:28.531 } 00:16:28.531 ]' 00:16:28.531 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:28.531 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.531 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:28.790 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:28.790 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:28.790 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.790 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.790 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.790 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:16:29.358 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.358 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.358 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.358 17:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.358 17:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.358 17:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.358 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:29.358 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:29.358 17:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:29.617 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:16:29.617 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:29.617 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:29.617 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:29.617 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:29.617 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:29.617 17:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.617 17:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.617 17:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.617 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:29.617 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:29.876 00:16:29.876 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:29.876 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:29.876 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.136 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.136 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.136 17:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.136 17:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.136 17:07:17 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.136 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:30.136 { 00:16:30.136 "cntlid": 51, 00:16:30.136 "qid": 0, 00:16:30.136 "state": "enabled", 00:16:30.136 "listen_address": { 00:16:30.136 "trtype": "TCP", 00:16:30.136 "adrfam": "IPv4", 00:16:30.136 "traddr": "10.0.0.2", 00:16:30.136 "trsvcid": "4420" 00:16:30.136 }, 00:16:30.136 "peer_address": { 00:16:30.136 "trtype": "TCP", 00:16:30.136 "adrfam": "IPv4", 00:16:30.136 "traddr": "10.0.0.1", 00:16:30.136 "trsvcid": "49122" 00:16:30.136 }, 00:16:30.136 "auth": { 00:16:30.136 "state": "completed", 00:16:30.136 "digest": "sha384", 00:16:30.136 "dhgroup": "null" 00:16:30.136 } 00:16:30.136 } 00:16:30.136 ]' 00:16:30.136 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:30.136 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.136 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:30.136 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:30.136 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:30.136 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.136 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.136 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.395 17:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:16:30.963 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.963 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.963 17:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.963 17:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.963 17:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.963 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:30.963 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:30.963 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:31.221 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:16:31.221 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:31.221 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:31.221 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=null 00:16:31.221 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:31.221 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:31.221 17:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.221 17:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.221 17:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.221 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:31.222 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:31.222 00:16:31.222 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:31.222 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:31.222 17:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.480 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.480 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.480 17:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.480 17:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.480 17:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.480 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:31.480 { 00:16:31.480 "cntlid": 53, 00:16:31.480 "qid": 0, 00:16:31.480 "state": "enabled", 00:16:31.480 "listen_address": { 00:16:31.480 "trtype": "TCP", 00:16:31.480 "adrfam": "IPv4", 00:16:31.481 "traddr": "10.0.0.2", 00:16:31.481 "trsvcid": "4420" 00:16:31.481 }, 00:16:31.481 "peer_address": { 00:16:31.481 "trtype": "TCP", 00:16:31.481 "adrfam": "IPv4", 00:16:31.481 "traddr": "10.0.0.1", 00:16:31.481 "trsvcid": "49140" 00:16:31.481 }, 00:16:31.481 "auth": { 00:16:31.481 "state": "completed", 00:16:31.481 "digest": "sha384", 00:16:31.481 "dhgroup": "null" 00:16:31.481 } 00:16:31.481 } 00:16:31.481 ]' 00:16:31.481 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:31.481 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.481 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:31.481 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:31.481 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:31.740 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.740 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:31.740 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.740 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:16:32.308 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.308 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.308 17:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.308 17:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.308 17:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.308 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:32.308 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:32.308 17:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:32.567 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:16:32.567 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:32.567 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:32.567 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:32.567 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:32.567 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:32.567 17:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.567 17:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.567 17:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.567 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.567 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.826 00:16:32.826 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:32.826 17:07:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.826 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:33.085 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.085 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.085 17:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.085 17:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.085 17:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.085 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:33.085 { 00:16:33.085 "cntlid": 55, 00:16:33.085 "qid": 0, 00:16:33.085 "state": "enabled", 00:16:33.085 "listen_address": { 00:16:33.085 "trtype": "TCP", 00:16:33.085 "adrfam": "IPv4", 00:16:33.085 "traddr": "10.0.0.2", 00:16:33.085 "trsvcid": "4420" 00:16:33.085 }, 00:16:33.085 "peer_address": { 00:16:33.085 "trtype": "TCP", 00:16:33.085 "adrfam": "IPv4", 00:16:33.085 "traddr": "10.0.0.1", 00:16:33.085 "trsvcid": "49186" 00:16:33.085 }, 00:16:33.085 "auth": { 00:16:33.085 "state": "completed", 00:16:33.085 "digest": "sha384", 00:16:33.085 "dhgroup": "null" 00:16:33.085 } 00:16:33.085 } 00:16:33.085 ]' 00:16:33.085 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:33.085 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.085 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:33.085 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:16:33.085 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:33.085 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.085 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.085 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.343 17:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:16:33.910 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.910 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.910 17:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.910 17:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.910 17:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.910 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 
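For reference, each connect_authenticate pass traced above amounts to the same host/target round trip for one (digest, dhgroup, key) combination. Below is a minimal bash sketch of that sequence, using the rpc.py path, host socket, subsystem NQN and host UUID that the log prints; the variable names, the use of the default target RPC socket, and the elided DH-HMAC-CHAP secret are illustrative placeholders, not the test script's own code.

    #!/usr/bin/env bash
    # Sketch of one (digest, dhgroup, key) iteration seen in the trace above.
    set -e

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    digest=sha384
    dhgroup=ffdhe2048
    key=key0
    dhchap_secret='DHHC-1:...'   # secret matching "$key"; full values appear in the trace and are elided here

    # Host side: restrict the initiator to the digest/DH group under test.
    $rpc -s "$hostsock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Target side (default RPC socket assumed): allow the host with the selected key.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"

    # Authenticate a bdev controller, then verify the negotiated auth parameters on the target.
    $rpc -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"
    [[ $($rpc -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
    $rpc -s "$hostsock" bdev_nvme_detach_controller nvme0

    # Repeat the check through the kernel initiator, then clean up the host entry.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret "$dhchap_secret"
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The enclosing loops visible in the trace (for digest, for dhgroup, for keyid) simply repeat this sequence for every configured digest, DH group (including null), and key.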
00:16:33.910 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:33.910 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:33.910 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:34.169 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:16:34.169 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:34.169 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:34.169 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:34.169 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:34.169 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:34.169 17:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.169 17:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.169 17:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.169 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:34.169 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:34.169 00:16:34.428 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:34.428 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:34.428 17:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.428 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.428 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.428 17:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.428 17:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.428 17:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.428 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:34.428 { 00:16:34.428 "cntlid": 57, 00:16:34.428 "qid": 0, 00:16:34.428 "state": "enabled", 00:16:34.428 "listen_address": { 00:16:34.428 "trtype": "TCP", 00:16:34.428 "adrfam": "IPv4", 00:16:34.428 "traddr": "10.0.0.2", 00:16:34.428 "trsvcid": "4420" 00:16:34.428 }, 00:16:34.428 "peer_address": { 00:16:34.428 "trtype": "TCP", 00:16:34.428 "adrfam": "IPv4", 00:16:34.428 "traddr": "10.0.0.1", 00:16:34.428 "trsvcid": 
"49210" 00:16:34.428 }, 00:16:34.428 "auth": { 00:16:34.428 "state": "completed", 00:16:34.429 "digest": "sha384", 00:16:34.429 "dhgroup": "ffdhe2048" 00:16:34.429 } 00:16:34.429 } 00:16:34.429 ]' 00:16:34.429 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:34.429 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.429 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:34.688 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.688 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:34.688 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.688 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.688 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.688 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:16:35.255 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.255 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.255 17:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.255 17:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.255 17:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.255 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:35.255 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.255 17:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.513 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:16:35.513 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:35.513 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:35.513 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:35.513 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:35.513 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:35.513 17:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.513 17:07:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:35.513 17:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.513 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:35.514 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:35.771 00:16:35.771 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:35.771 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:35.771 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.030 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.030 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.030 17:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.030 17:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.030 17:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.030 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:36.030 { 00:16:36.030 "cntlid": 59, 00:16:36.030 "qid": 0, 00:16:36.030 "state": "enabled", 00:16:36.030 "listen_address": { 00:16:36.030 "trtype": "TCP", 00:16:36.030 "adrfam": "IPv4", 00:16:36.030 "traddr": "10.0.0.2", 00:16:36.030 "trsvcid": "4420" 00:16:36.030 }, 00:16:36.030 "peer_address": { 00:16:36.030 "trtype": "TCP", 00:16:36.030 "adrfam": "IPv4", 00:16:36.030 "traddr": "10.0.0.1", 00:16:36.030 "trsvcid": "57320" 00:16:36.030 }, 00:16:36.030 "auth": { 00:16:36.030 "state": "completed", 00:16:36.030 "digest": "sha384", 00:16:36.030 "dhgroup": "ffdhe2048" 00:16:36.030 } 00:16:36.030 } 00:16:36.030 ]' 00:16:36.030 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:36.030 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.030 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:36.030 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.030 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:36.030 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.030 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.030 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.288 17:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:16:36.855 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.855 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.855 17:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.855 17:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.855 17:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.855 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:36.856 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:36.856 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.115 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:16:37.115 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:37.115 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:37.115 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:37.115 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:37.115 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:37.115 17:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.115 17:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.115 17:07:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.115 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:37.115 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:37.373 00:16:37.373 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:37.373 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:37.373 17:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.373 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.373 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:37.373 17:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.373 17:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.373 17:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.373 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:37.373 { 00:16:37.373 "cntlid": 61, 00:16:37.373 "qid": 0, 00:16:37.373 "state": "enabled", 00:16:37.373 "listen_address": { 00:16:37.373 "trtype": "TCP", 00:16:37.373 "adrfam": "IPv4", 00:16:37.373 "traddr": "10.0.0.2", 00:16:37.373 "trsvcid": "4420" 00:16:37.373 }, 00:16:37.373 "peer_address": { 00:16:37.373 "trtype": "TCP", 00:16:37.373 "adrfam": "IPv4", 00:16:37.373 "traddr": "10.0.0.1", 00:16:37.373 "trsvcid": "57350" 00:16:37.373 }, 00:16:37.373 "auth": { 00:16:37.373 "state": "completed", 00:16:37.373 "digest": "sha384", 00:16:37.373 "dhgroup": "ffdhe2048" 00:16:37.373 } 00:16:37.373 } 00:16:37.373 ]' 00:16:37.373 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:37.632 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.632 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:37.632 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.632 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:37.632 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.632 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.632 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.891 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:16:38.459 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.459 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.459 17:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.459 17:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.459 17:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.459 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:38.459 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:38.459 17:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:38.460 17:07:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:16:38.460 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:38.460 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:38.460 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:38.460 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:38.460 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:38.460 17:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.460 17:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.460 17:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.460 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.460 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:38.718 00:16:38.718 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:38.718 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:38.718 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.977 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.977 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.977 17:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.977 17:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.977 17:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.977 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:38.977 { 00:16:38.977 "cntlid": 63, 00:16:38.977 "qid": 0, 00:16:38.977 "state": "enabled", 00:16:38.977 "listen_address": { 00:16:38.977 "trtype": "TCP", 00:16:38.977 "adrfam": "IPv4", 00:16:38.977 "traddr": "10.0.0.2", 00:16:38.977 "trsvcid": "4420" 00:16:38.977 }, 00:16:38.977 "peer_address": { 00:16:38.977 "trtype": "TCP", 00:16:38.977 "adrfam": "IPv4", 00:16:38.977 "traddr": "10.0.0.1", 00:16:38.977 "trsvcid": "57386" 00:16:38.977 }, 00:16:38.977 "auth": { 00:16:38.977 "state": "completed", 00:16:38.977 "digest": "sha384", 00:16:38.977 "dhgroup": "ffdhe2048" 00:16:38.977 } 00:16:38.977 } 00:16:38.977 ]' 00:16:38.977 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:38.977 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.977 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:38.977 17:07:26 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.977 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:39.237 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.237 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.237 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.237 17:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:16:39.805 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.805 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.805 17:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.805 17:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.805 17:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.805 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.805 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:39.805 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:39.805 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:40.064 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:16:40.064 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:40.064 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:40.064 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:40.064 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:40.064 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:40.064 17:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.064 17:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.064 17:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.064 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:40.065 17:07:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:40.323 00:16:40.323 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:40.323 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:40.324 17:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.640 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.640 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.640 17:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.640 17:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.640 17:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.640 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:40.640 { 00:16:40.640 "cntlid": 65, 00:16:40.640 "qid": 0, 00:16:40.640 "state": "enabled", 00:16:40.640 "listen_address": { 00:16:40.640 "trtype": "TCP", 00:16:40.640 "adrfam": "IPv4", 00:16:40.640 "traddr": "10.0.0.2", 00:16:40.640 "trsvcid": "4420" 00:16:40.640 }, 00:16:40.640 "peer_address": { 00:16:40.640 "trtype": "TCP", 00:16:40.640 "adrfam": "IPv4", 00:16:40.640 "traddr": "10.0.0.1", 00:16:40.640 "trsvcid": "57398" 00:16:40.640 }, 00:16:40.640 "auth": { 00:16:40.640 "state": "completed", 00:16:40.640 "digest": "sha384", 00:16:40.640 "dhgroup": "ffdhe3072" 00:16:40.640 } 00:16:40.640 } 00:16:40.640 ]' 00:16:40.640 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:40.640 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.640 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:40.640 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.640 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:40.640 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.640 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.640 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.938 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:16:41.248 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.248 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.248 17:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.248 17:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.248 17:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.248 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:41.248 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:41.248 17:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:41.508 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:16:41.508 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:41.508 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:41.508 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:41.508 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:41.508 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:41.508 17:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.508 17:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.508 17:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.508 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:41.508 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:41.767 00:16:41.767 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:41.767 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:41.767 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.026 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.026 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.026 17:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.026 17:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.026 17:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.026 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:42.026 { 
00:16:42.026 "cntlid": 67, 00:16:42.026 "qid": 0, 00:16:42.026 "state": "enabled", 00:16:42.026 "listen_address": { 00:16:42.027 "trtype": "TCP", 00:16:42.027 "adrfam": "IPv4", 00:16:42.027 "traddr": "10.0.0.2", 00:16:42.027 "trsvcid": "4420" 00:16:42.027 }, 00:16:42.027 "peer_address": { 00:16:42.027 "trtype": "TCP", 00:16:42.027 "adrfam": "IPv4", 00:16:42.027 "traddr": "10.0.0.1", 00:16:42.027 "trsvcid": "57422" 00:16:42.027 }, 00:16:42.027 "auth": { 00:16:42.027 "state": "completed", 00:16:42.027 "digest": "sha384", 00:16:42.027 "dhgroup": "ffdhe3072" 00:16:42.027 } 00:16:42.027 } 00:16:42.027 ]' 00:16:42.027 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:42.027 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.027 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:42.027 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.027 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:42.027 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.027 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.027 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.285 17:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:16:42.851 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.851 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.851 17:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.851 17:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.851 17:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.851 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:42.851 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:42.851 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.109 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:16:43.109 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:43.109 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:43.109 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:43.109 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:43.109 
17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:43.109 17:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.109 17:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.109 17:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.109 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:43.109 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:43.368 00:16:43.368 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:43.368 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:43.368 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.368 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.368 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.368 17:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.368 17:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.368 17:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.368 17:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:43.368 { 00:16:43.368 "cntlid": 69, 00:16:43.368 "qid": 0, 00:16:43.368 "state": "enabled", 00:16:43.368 "listen_address": { 00:16:43.368 "trtype": "TCP", 00:16:43.368 "adrfam": "IPv4", 00:16:43.368 "traddr": "10.0.0.2", 00:16:43.368 "trsvcid": "4420" 00:16:43.368 }, 00:16:43.368 "peer_address": { 00:16:43.368 "trtype": "TCP", 00:16:43.368 "adrfam": "IPv4", 00:16:43.368 "traddr": "10.0.0.1", 00:16:43.368 "trsvcid": "57454" 00:16:43.368 }, 00:16:43.368 "auth": { 00:16:43.368 "state": "completed", 00:16:43.368 "digest": "sha384", 00:16:43.368 "dhgroup": "ffdhe3072" 00:16:43.368 } 00:16:43.368 } 00:16:43.368 ]' 00:16:43.368 17:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:43.627 17:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.627 17:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:43.627 17:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:43.627 17:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:43.627 17:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.627 17:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.627 17:07:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.886 17:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:16:44.454 17:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.454 17:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.454 17:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.454 17:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.454 17:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.454 17:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:44.454 17:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:44.454 17:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:44.454 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:16:44.454 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:44.454 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:44.454 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:44.454 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:44.454 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:44.454 17:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.454 17:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.454 17:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.454 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.454 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.713 00:16:44.713 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:44.713 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.713 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:44.972 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.972 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.972 17:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.972 17:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.972 17:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.972 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:44.972 { 00:16:44.972 "cntlid": 71, 00:16:44.972 "qid": 0, 00:16:44.972 "state": "enabled", 00:16:44.972 "listen_address": { 00:16:44.972 "trtype": "TCP", 00:16:44.972 "adrfam": "IPv4", 00:16:44.972 "traddr": "10.0.0.2", 00:16:44.972 "trsvcid": "4420" 00:16:44.972 }, 00:16:44.972 "peer_address": { 00:16:44.972 "trtype": "TCP", 00:16:44.972 "adrfam": "IPv4", 00:16:44.972 "traddr": "10.0.0.1", 00:16:44.972 "trsvcid": "57492" 00:16:44.972 }, 00:16:44.972 "auth": { 00:16:44.972 "state": "completed", 00:16:44.972 "digest": "sha384", 00:16:44.972 "dhgroup": "ffdhe3072" 00:16:44.972 } 00:16:44.972 } 00:16:44.972 ]' 00:16:44.972 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:44.972 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.972 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:44.972 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.972 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:45.231 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.231 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.231 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.231 17:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:16:45.799 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.799 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.799 17:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.799 17:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.799 17:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.799 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 
00:16:45.799 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:45.799 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.799 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:46.058 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:16:46.058 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:46.058 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:46.058 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:46.058 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:46.058 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:46.058 17:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.058 17:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.058 17:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.058 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:46.058 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:46.317 00:16:46.317 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:46.317 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:46.317 17:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.576 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.576 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.576 17:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.576 17:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.576 17:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.576 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:46.576 { 00:16:46.576 "cntlid": 73, 00:16:46.576 "qid": 0, 00:16:46.576 "state": "enabled", 00:16:46.576 "listen_address": { 00:16:46.576 "trtype": "TCP", 00:16:46.576 "adrfam": "IPv4", 00:16:46.576 "traddr": "10.0.0.2", 00:16:46.576 "trsvcid": "4420" 00:16:46.576 }, 00:16:46.576 "peer_address": { 00:16:46.576 "trtype": "TCP", 00:16:46.576 "adrfam": "IPv4", 00:16:46.576 "traddr": "10.0.0.1", 00:16:46.576 "trsvcid": 
"45456" 00:16:46.576 }, 00:16:46.576 "auth": { 00:16:46.576 "state": "completed", 00:16:46.576 "digest": "sha384", 00:16:46.576 "dhgroup": "ffdhe4096" 00:16:46.576 } 00:16:46.576 } 00:16:46.576 ]' 00:16:46.576 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:46.576 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.576 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:46.576 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.576 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:46.576 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.576 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.576 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.835 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:16:47.402 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.402 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.402 17:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.402 17:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.402 17:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.402 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:47.402 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.402 17:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:47.661 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:16:47.661 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:47.661 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:47.661 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:47.661 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:47.661 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:47.661 17:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.661 17:07:35 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:47.661 17:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.661 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:47.661 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:47.920 00:16:47.920 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:47.920 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:47.920 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.920 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.920 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.920 17:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.920 17:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.920 17:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.920 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:47.920 { 00:16:47.920 "cntlid": 75, 00:16:47.920 "qid": 0, 00:16:47.920 "state": "enabled", 00:16:47.920 "listen_address": { 00:16:47.920 "trtype": "TCP", 00:16:47.920 "adrfam": "IPv4", 00:16:47.920 "traddr": "10.0.0.2", 00:16:47.920 "trsvcid": "4420" 00:16:47.920 }, 00:16:47.920 "peer_address": { 00:16:47.920 "trtype": "TCP", 00:16:47.920 "adrfam": "IPv4", 00:16:47.920 "traddr": "10.0.0.1", 00:16:47.920 "trsvcid": "45484" 00:16:47.920 }, 00:16:47.920 "auth": { 00:16:47.920 "state": "completed", 00:16:47.920 "digest": "sha384", 00:16:47.920 "dhgroup": "ffdhe4096" 00:16:47.920 } 00:16:47.920 } 00:16:47.920 ]' 00:16:47.920 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:48.179 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.179 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:48.179 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.179 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:48.179 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.179 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.179 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.438 17:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:49.006 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:49.265 00:16:49.265 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:49.265 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:49.265 17:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.524 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.524 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:49.524 17:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.524 17:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.524 17:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.524 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:49.524 { 00:16:49.524 "cntlid": 77, 00:16:49.524 "qid": 0, 00:16:49.524 "state": "enabled", 00:16:49.524 "listen_address": { 00:16:49.524 "trtype": "TCP", 00:16:49.524 "adrfam": "IPv4", 00:16:49.524 "traddr": "10.0.0.2", 00:16:49.524 "trsvcid": "4420" 00:16:49.524 }, 00:16:49.524 "peer_address": { 00:16:49.524 "trtype": "TCP", 00:16:49.524 "adrfam": "IPv4", 00:16:49.524 "traddr": "10.0.0.1", 00:16:49.524 "trsvcid": "45504" 00:16:49.524 }, 00:16:49.524 "auth": { 00:16:49.524 "state": "completed", 00:16:49.524 "digest": "sha384", 00:16:49.524 "dhgroup": "ffdhe4096" 00:16:49.524 } 00:16:49.524 } 00:16:49.524 ]' 00:16:49.524 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:49.524 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.524 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:49.524 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.524 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:49.524 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.524 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.524 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.783 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:16:50.351 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.351 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.351 17:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.351 17:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.351 17:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.351 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:50.351 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:50.351 17:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:50.610 17:07:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:16:50.610 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:50.610 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:50.610 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:50.610 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:50.610 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:50.610 17:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.610 17:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.610 17:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.610 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.610 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.869 00:16:50.869 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:50.869 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:50.869 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.128 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.128 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.128 17:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.128 17:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.128 17:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.128 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:51.128 { 00:16:51.128 "cntlid": 79, 00:16:51.128 "qid": 0, 00:16:51.128 "state": "enabled", 00:16:51.128 "listen_address": { 00:16:51.128 "trtype": "TCP", 00:16:51.128 "adrfam": "IPv4", 00:16:51.128 "traddr": "10.0.0.2", 00:16:51.128 "trsvcid": "4420" 00:16:51.128 }, 00:16:51.128 "peer_address": { 00:16:51.128 "trtype": "TCP", 00:16:51.128 "adrfam": "IPv4", 00:16:51.128 "traddr": "10.0.0.1", 00:16:51.128 "trsvcid": "45532" 00:16:51.128 }, 00:16:51.128 "auth": { 00:16:51.128 "state": "completed", 00:16:51.128 "digest": "sha384", 00:16:51.128 "dhgroup": "ffdhe4096" 00:16:51.128 } 00:16:51.128 } 00:16:51.128 ]' 00:16:51.128 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:51.128 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.128 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:51.128 17:07:38 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:51.128 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:51.128 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.128 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.128 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.387 17:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.955 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:51.955 17:07:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:52.523 00:16:52.523 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:52.523 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:52.523 17:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.523 17:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.523 17:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.524 17:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.524 17:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.524 17:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.524 17:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:52.524 { 00:16:52.524 "cntlid": 81, 00:16:52.524 "qid": 0, 00:16:52.524 "state": "enabled", 00:16:52.524 "listen_address": { 00:16:52.524 "trtype": "TCP", 00:16:52.524 "adrfam": "IPv4", 00:16:52.524 "traddr": "10.0.0.2", 00:16:52.524 "trsvcid": "4420" 00:16:52.524 }, 00:16:52.524 "peer_address": { 00:16:52.524 "trtype": "TCP", 00:16:52.524 "adrfam": "IPv4", 00:16:52.524 "traddr": "10.0.0.1", 00:16:52.524 "trsvcid": "45578" 00:16:52.524 }, 00:16:52.524 "auth": { 00:16:52.524 "state": "completed", 00:16:52.524 "digest": "sha384", 00:16:52.524 "dhgroup": "ffdhe6144" 00:16:52.524 } 00:16:52.524 } 00:16:52.524 ]' 00:16:52.524 17:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:52.783 17:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.783 17:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:52.783 17:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.783 17:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:52.783 17:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.783 17:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.783 17:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.042 17:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:53.610 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:16:54.178 00:16:54.178 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:54.178 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.178 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:54.178 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.178 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.178 17:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.178 17:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.178 17:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.178 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:54.178 { 
00:16:54.178 "cntlid": 83, 00:16:54.178 "qid": 0, 00:16:54.178 "state": "enabled", 00:16:54.178 "listen_address": { 00:16:54.178 "trtype": "TCP", 00:16:54.178 "adrfam": "IPv4", 00:16:54.178 "traddr": "10.0.0.2", 00:16:54.178 "trsvcid": "4420" 00:16:54.178 }, 00:16:54.178 "peer_address": { 00:16:54.178 "trtype": "TCP", 00:16:54.179 "adrfam": "IPv4", 00:16:54.179 "traddr": "10.0.0.1", 00:16:54.179 "trsvcid": "45600" 00:16:54.179 }, 00:16:54.179 "auth": { 00:16:54.179 "state": "completed", 00:16:54.179 "digest": "sha384", 00:16:54.179 "dhgroup": "ffdhe6144" 00:16:54.179 } 00:16:54.179 } 00:16:54.179 ]' 00:16:54.179 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:54.179 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.179 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:54.179 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.179 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:54.437 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.437 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.437 17:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.437 17:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:16:55.014 17:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.014 17:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.014 17:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.014 17:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.014 17:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.014 17:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:55.014 17:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.014 17:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:55.273 17:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:16:55.273 17:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:55.273 17:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:55.273 17:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:55.273 17:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:55.273 
17:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:16:55.273 17:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.273 17:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.273 17:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.273 17:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:55.273 17:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:55.531 00:16:55.531 17:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:55.531 17:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.531 17:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:55.789 17:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.789 17:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.789 17:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.789 17:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.789 17:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.789 17:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:55.789 { 00:16:55.789 "cntlid": 85, 00:16:55.789 "qid": 0, 00:16:55.789 "state": "enabled", 00:16:55.789 "listen_address": { 00:16:55.789 "trtype": "TCP", 00:16:55.789 "adrfam": "IPv4", 00:16:55.789 "traddr": "10.0.0.2", 00:16:55.789 "trsvcid": "4420" 00:16:55.789 }, 00:16:55.789 "peer_address": { 00:16:55.789 "trtype": "TCP", 00:16:55.789 "adrfam": "IPv4", 00:16:55.789 "traddr": "10.0.0.1", 00:16:55.789 "trsvcid": "45632" 00:16:55.789 }, 00:16:55.789 "auth": { 00:16:55.789 "state": "completed", 00:16:55.789 "digest": "sha384", 00:16:55.789 "dhgroup": "ffdhe6144" 00:16:55.789 } 00:16:55.790 } 00:16:55.790 ]' 00:16:55.790 17:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:55.790 17:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.790 17:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:55.790 17:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:55.790 17:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:56.048 17:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.048 17:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.048 17:07:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.048 17:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:16:56.616 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.616 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.616 17:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.616 17:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.616 17:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.616 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:56.616 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.616 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:56.908 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:16:56.908 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:56.908 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:56.908 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:56.908 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:56.908 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:56.908 17:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.908 17:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.908 17:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.908 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:56.908 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.166 00:16:57.166 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:57.166 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:57.166 17:07:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.425 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.425 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.425 17:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.425 17:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.425 17:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.425 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:57.425 { 00:16:57.425 "cntlid": 87, 00:16:57.425 "qid": 0, 00:16:57.425 "state": "enabled", 00:16:57.425 "listen_address": { 00:16:57.425 "trtype": "TCP", 00:16:57.425 "adrfam": "IPv4", 00:16:57.425 "traddr": "10.0.0.2", 00:16:57.425 "trsvcid": "4420" 00:16:57.425 }, 00:16:57.425 "peer_address": { 00:16:57.425 "trtype": "TCP", 00:16:57.425 "adrfam": "IPv4", 00:16:57.425 "traddr": "10.0.0.1", 00:16:57.425 "trsvcid": "43566" 00:16:57.425 }, 00:16:57.425 "auth": { 00:16:57.425 "state": "completed", 00:16:57.425 "digest": "sha384", 00:16:57.425 "dhgroup": "ffdhe6144" 00:16:57.425 } 00:16:57.425 } 00:16:57.425 ]' 00:16:57.425 17:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:57.425 17:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.425 17:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:57.425 17:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.425 17:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:57.683 17:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.683 17:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.683 17:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.683 17:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:16:58.251 17:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.251 17:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.251 17:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.251 17:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.251 17:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.251 17:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.251 17:07:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:16:58.251 17:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.251 17:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.510 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:16:58.510 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:16:58.510 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:58.510 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:58.510 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:58.510 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:16:58.511 17:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.511 17:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.511 17:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.511 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:58.511 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:16:59.079 00:16:59.079 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:16:59.079 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:16:59.079 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.079 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.079 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.079 17:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.079 17:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.079 17:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.079 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:16:59.079 { 00:16:59.079 "cntlid": 89, 00:16:59.079 "qid": 0, 00:16:59.079 "state": "enabled", 00:16:59.079 "listen_address": { 00:16:59.079 "trtype": "TCP", 00:16:59.079 "adrfam": "IPv4", 00:16:59.079 "traddr": "10.0.0.2", 00:16:59.079 "trsvcid": "4420" 00:16:59.079 }, 00:16:59.079 "peer_address": { 00:16:59.079 "trtype": "TCP", 00:16:59.079 "adrfam": "IPv4", 00:16:59.079 "traddr": "10.0.0.1", 00:16:59.079 "trsvcid": "43594" 00:16:59.079 }, 
00:16:59.079 "auth": { 00:16:59.079 "state": "completed", 00:16:59.079 "digest": "sha384", 00:16:59.079 "dhgroup": "ffdhe8192" 00:16:59.079 } 00:16:59.079 } 00:16:59.079 ]' 00:16:59.079 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:16:59.338 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.338 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:16:59.338 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.338 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:16:59.338 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.338 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.338 17:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.596 17:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:00.165 17:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:00.732 00:17:00.732 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:00.732 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:00.732 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.991 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.991 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.991 17:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.991 17:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.991 17:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.991 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:00.991 { 00:17:00.991 "cntlid": 91, 00:17:00.991 "qid": 0, 00:17:00.991 "state": "enabled", 00:17:00.991 "listen_address": { 00:17:00.991 "trtype": "TCP", 00:17:00.991 "adrfam": "IPv4", 00:17:00.991 "traddr": "10.0.0.2", 00:17:00.991 "trsvcid": "4420" 00:17:00.991 }, 00:17:00.991 "peer_address": { 00:17:00.991 "trtype": "TCP", 00:17:00.991 "adrfam": "IPv4", 00:17:00.991 "traddr": "10.0.0.1", 00:17:00.991 "trsvcid": "43622" 00:17:00.991 }, 00:17:00.991 "auth": { 00:17:00.991 "state": "completed", 00:17:00.991 "digest": "sha384", 00:17:00.991 "dhgroup": "ffdhe8192" 00:17:00.991 } 00:17:00.991 } 00:17:00.991 ]' 00:17:00.991 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:00.991 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.992 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:00.992 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.992 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:00.992 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.992 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.992 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.250 17:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:17:01.818 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.818 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.818 17:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.818 17:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.818 17:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.818 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:01.818 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:01.818 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:02.076 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:17:02.076 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:02.076 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:02.076 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:02.076 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:02.076 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:17:02.076 17:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.076 17:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.076 17:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.076 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:02.076 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:02.334 00:17:02.334 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:02.334 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:02.334 17:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.592 17:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.592 17:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.592 17:07:50 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.592 17:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.592 17:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.592 17:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:02.592 { 00:17:02.592 "cntlid": 93, 00:17:02.592 "qid": 0, 00:17:02.592 "state": "enabled", 00:17:02.592 "listen_address": { 00:17:02.592 "trtype": "TCP", 00:17:02.592 "adrfam": "IPv4", 00:17:02.592 "traddr": "10.0.0.2", 00:17:02.592 "trsvcid": "4420" 00:17:02.592 }, 00:17:02.592 "peer_address": { 00:17:02.592 "trtype": "TCP", 00:17:02.592 "adrfam": "IPv4", 00:17:02.592 "traddr": "10.0.0.1", 00:17:02.592 "trsvcid": "43642" 00:17:02.592 }, 00:17:02.592 "auth": { 00:17:02.593 "state": "completed", 00:17:02.593 "digest": "sha384", 00:17:02.593 "dhgroup": "ffdhe8192" 00:17:02.593 } 00:17:02.593 } 00:17:02.593 ]' 00:17:02.593 17:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:02.593 17:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.593 17:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:02.851 17:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.851 17:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:02.851 17:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.851 17:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.851 17:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.851 17:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:17:03.417 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.417 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.417 17:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.417 17:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.417 17:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.417 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:03.417 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:03.417 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:03.675 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 
ffdhe8192 3 00:17:03.675 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:03.675 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:03.675 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:03.675 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:03.675 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:03.675 17:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.675 17:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.675 17:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.675 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.675 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.241 00:17:04.241 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:04.241 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.241 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:04.241 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.241 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.241 17:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.241 17:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.241 17:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.241 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:04.241 { 00:17:04.241 "cntlid": 95, 00:17:04.241 "qid": 0, 00:17:04.241 "state": "enabled", 00:17:04.241 "listen_address": { 00:17:04.241 "trtype": "TCP", 00:17:04.241 "adrfam": "IPv4", 00:17:04.241 "traddr": "10.0.0.2", 00:17:04.241 "trsvcid": "4420" 00:17:04.241 }, 00:17:04.241 "peer_address": { 00:17:04.241 "trtype": "TCP", 00:17:04.241 "adrfam": "IPv4", 00:17:04.241 "traddr": "10.0.0.1", 00:17:04.241 "trsvcid": "43678" 00:17:04.241 }, 00:17:04.241 "auth": { 00:17:04.241 "state": "completed", 00:17:04.241 "digest": "sha384", 00:17:04.241 "dhgroup": "ffdhe8192" 00:17:04.241 } 00:17:04.241 } 00:17:04.241 ]' 00:17:04.241 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:04.499 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.499 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:04.499 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == 
\f\f\d\h\e\8\1\9\2 ]] 00:17:04.499 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:04.499 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.499 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.499 17:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.757 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 00:17:05.323 17:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:05.580 00:17:05.580 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:05.580 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:05.580 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.838 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.838 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.838 17:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.838 17:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.838 17:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.838 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:05.838 { 00:17:05.838 "cntlid": 97, 00:17:05.838 "qid": 0, 00:17:05.838 "state": "enabled", 00:17:05.838 "listen_address": { 00:17:05.838 "trtype": "TCP", 00:17:05.838 "adrfam": "IPv4", 00:17:05.838 "traddr": "10.0.0.2", 00:17:05.838 "trsvcid": "4420" 00:17:05.838 }, 00:17:05.838 "peer_address": { 00:17:05.838 "trtype": "TCP", 00:17:05.838 "adrfam": "IPv4", 00:17:05.838 "traddr": "10.0.0.1", 00:17:05.838 "trsvcid": "43712" 00:17:05.838 }, 00:17:05.838 "auth": { 00:17:05.838 "state": "completed", 00:17:05.838 "digest": "sha512", 00:17:05.838 "dhgroup": "null" 00:17:05.838 } 00:17:05.838 } 00:17:05.838 ]' 00:17:05.838 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:05.838 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.838 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:05.838 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:05.838 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:05.838 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.838 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.838 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.097 17:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:17:06.662 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.662 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.662 17:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.662 17:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.662 17:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.662 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:06.662 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:06.662 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:06.920 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:17:06.920 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:06.920 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:06.920 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:06.920 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:06.920 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:06.920 17:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.920 17:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.920 17:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.920 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:06.920 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:07.178 00:17:07.178 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:07.178 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:07.178 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.436 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.436 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.436 17:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.436 17:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.436 17:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.436 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:07.436 { 00:17:07.436 
"cntlid": 99, 00:17:07.436 "qid": 0, 00:17:07.436 "state": "enabled", 00:17:07.436 "listen_address": { 00:17:07.436 "trtype": "TCP", 00:17:07.436 "adrfam": "IPv4", 00:17:07.436 "traddr": "10.0.0.2", 00:17:07.436 "trsvcid": "4420" 00:17:07.436 }, 00:17:07.436 "peer_address": { 00:17:07.436 "trtype": "TCP", 00:17:07.436 "adrfam": "IPv4", 00:17:07.436 "traddr": "10.0.0.1", 00:17:07.436 "trsvcid": "56764" 00:17:07.436 }, 00:17:07.436 "auth": { 00:17:07.436 "state": "completed", 00:17:07.436 "digest": "sha512", 00:17:07.436 "dhgroup": "null" 00:17:07.436 } 00:17:07.436 } 00:17:07.436 ]' 00:17:07.436 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:07.436 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.436 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:07.436 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:07.436 17:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:07.436 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.436 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.436 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.694 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:17:08.259 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.259 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.259 17:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.259 17:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.259 17:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.259 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:08.259 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:08.259 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:08.259 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:17:08.259 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:08.259 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:08.259 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:08.260 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:08.260 17:07:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:17:08.260 17:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.260 17:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.518 17:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.518 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:08.518 17:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:08.518 00:17:08.518 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:08.518 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:08.518 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.776 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.776 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.776 17:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.776 17:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.776 17:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.776 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:08.776 { 00:17:08.776 "cntlid": 101, 00:17:08.776 "qid": 0, 00:17:08.776 "state": "enabled", 00:17:08.776 "listen_address": { 00:17:08.776 "trtype": "TCP", 00:17:08.776 "adrfam": "IPv4", 00:17:08.776 "traddr": "10.0.0.2", 00:17:08.776 "trsvcid": "4420" 00:17:08.776 }, 00:17:08.776 "peer_address": { 00:17:08.776 "trtype": "TCP", 00:17:08.776 "adrfam": "IPv4", 00:17:08.776 "traddr": "10.0.0.1", 00:17:08.776 "trsvcid": "56792" 00:17:08.776 }, 00:17:08.776 "auth": { 00:17:08.776 "state": "completed", 00:17:08.776 "digest": "sha512", 00:17:08.776 "dhgroup": "null" 00:17:08.776 } 00:17:08.776 } 00:17:08.776 ]' 00:17:08.776 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:08.776 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.776 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:09.034 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:09.034 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:09.034 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.034 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.034 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.034 17:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:17:09.599 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.599 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.599 17:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.599 17:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.599 17:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.599 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:09.599 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:09.599 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:09.856 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:17:09.856 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:09.856 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:09.856 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:09.856 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:09.856 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:09.856 17:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.856 17:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.856 17:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.856 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.856 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.114 00:17:10.114 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:10.114 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:10.114 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.372 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.372 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.372 17:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.372 17:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.372 17:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.372 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:10.372 { 00:17:10.372 "cntlid": 103, 00:17:10.372 "qid": 0, 00:17:10.372 "state": "enabled", 00:17:10.372 "listen_address": { 00:17:10.372 "trtype": "TCP", 00:17:10.372 "adrfam": "IPv4", 00:17:10.372 "traddr": "10.0.0.2", 00:17:10.372 "trsvcid": "4420" 00:17:10.372 }, 00:17:10.372 "peer_address": { 00:17:10.372 "trtype": "TCP", 00:17:10.372 "adrfam": "IPv4", 00:17:10.372 "traddr": "10.0.0.1", 00:17:10.372 "trsvcid": "56814" 00:17:10.372 }, 00:17:10.372 "auth": { 00:17:10.372 "state": "completed", 00:17:10.372 "digest": "sha512", 00:17:10.372 "dhgroup": "null" 00:17:10.372 } 00:17:10.372 } 00:17:10.372 ]' 00:17:10.372 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:10.372 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.372 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:10.372 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:10.372 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:10.372 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.372 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.372 17:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.630 17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:17:11.195 17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.195 17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.195 17:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.195 17:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.195 17:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.195 17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.195 17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:11.195 
17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.195 17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.452 17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:17:11.452 17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:11.452 17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:11.452 17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:11.452 17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:11.453 17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:17:11.453 17:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.453 17:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.453 17:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.453 17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:11.453 17:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:11.711 00:17:11.711 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:11.711 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:11.711 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.711 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.711 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.711 17:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.711 17:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.711 17:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.711 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:11.711 { 00:17:11.711 "cntlid": 105, 00:17:11.711 "qid": 0, 00:17:11.711 "state": "enabled", 00:17:11.711 "listen_address": { 00:17:11.711 "trtype": "TCP", 00:17:11.711 "adrfam": "IPv4", 00:17:11.711 "traddr": "10.0.0.2", 00:17:11.711 "trsvcid": "4420" 00:17:11.711 }, 00:17:11.711 "peer_address": { 00:17:11.711 "trtype": "TCP", 00:17:11.711 "adrfam": "IPv4", 00:17:11.711 "traddr": "10.0.0.1", 00:17:11.711 "trsvcid": "56846" 00:17:11.711 }, 00:17:11.711 "auth": { 00:17:11.711 "state": "completed", 00:17:11.711 "digest": "sha512", 
00:17:11.711 "dhgroup": "ffdhe2048" 00:17:11.711 } 00:17:11.711 } 00:17:11.711 ]' 00:17:11.711 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:11.711 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.711 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:11.969 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.969 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:11.969 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.969 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.969 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.969 17:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:17:12.536 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.536 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.536 17:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.536 17:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.536 17:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.536 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:12.536 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:12.536 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:12.795 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:17:12.795 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:12.795 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:12.795 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:12.795 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:12.795 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:12.795 17:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.795 17:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.795 17:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:17:12.795 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:12.795 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:13.054 00:17:13.055 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:13.055 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:13.055 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.314 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.314 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.314 17:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.314 17:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.314 17:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.314 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:13.314 { 00:17:13.314 "cntlid": 107, 00:17:13.314 "qid": 0, 00:17:13.314 "state": "enabled", 00:17:13.314 "listen_address": { 00:17:13.314 "trtype": "TCP", 00:17:13.314 "adrfam": "IPv4", 00:17:13.314 "traddr": "10.0.0.2", 00:17:13.314 "trsvcid": "4420" 00:17:13.314 }, 00:17:13.314 "peer_address": { 00:17:13.314 "trtype": "TCP", 00:17:13.314 "adrfam": "IPv4", 00:17:13.314 "traddr": "10.0.0.1", 00:17:13.314 "trsvcid": "56872" 00:17:13.314 }, 00:17:13.314 "auth": { 00:17:13.314 "state": "completed", 00:17:13.314 "digest": "sha512", 00:17:13.314 "dhgroup": "ffdhe2048" 00:17:13.314 } 00:17:13.314 } 00:17:13.314 ]' 00:17:13.314 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:13.314 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.314 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:13.314 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.314 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:13.314 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.314 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.314 17:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.573 17:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:17:14.141 17:08:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.141 17:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.141 17:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.141 17:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.141 17:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.141 17:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:14.141 17:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:14.141 17:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:14.399 17:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:17:14.399 17:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:14.399 17:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:14.399 17:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:14.399 17:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:14.399 17:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:17:14.399 17:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.399 17:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.399 17:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.399 17:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:14.399 17:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:14.657 00:17:14.657 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:14.657 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.657 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:14.657 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.657 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.657 17:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.657 17:08:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.657 17:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.916 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:14.916 { 00:17:14.916 "cntlid": 109, 00:17:14.916 "qid": 0, 00:17:14.916 "state": "enabled", 00:17:14.916 "listen_address": { 00:17:14.916 "trtype": "TCP", 00:17:14.916 "adrfam": "IPv4", 00:17:14.916 "traddr": "10.0.0.2", 00:17:14.916 "trsvcid": "4420" 00:17:14.916 }, 00:17:14.916 "peer_address": { 00:17:14.916 "trtype": "TCP", 00:17:14.916 "adrfam": "IPv4", 00:17:14.916 "traddr": "10.0.0.1", 00:17:14.916 "trsvcid": "56906" 00:17:14.916 }, 00:17:14.916 "auth": { 00:17:14.916 "state": "completed", 00:17:14.916 "digest": "sha512", 00:17:14.916 "dhgroup": "ffdhe2048" 00:17:14.916 } 00:17:14.916 } 00:17:14.916 ]' 00:17:14.916 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:14.916 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:14.916 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:14.916 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.916 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:14.916 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.916 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.916 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.174 17:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest 
dhgroup key qpairs 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:15.738 17:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.739 17:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.739 17:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.739 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.739 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:15.996 00:17:15.996 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:15.996 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:15.997 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.255 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.255 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.255 17:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.255 17:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.255 17:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.255 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:16.255 { 00:17:16.255 "cntlid": 111, 00:17:16.255 "qid": 0, 00:17:16.255 "state": "enabled", 00:17:16.255 "listen_address": { 00:17:16.255 "trtype": "TCP", 00:17:16.255 "adrfam": "IPv4", 00:17:16.255 "traddr": "10.0.0.2", 00:17:16.255 "trsvcid": "4420" 00:17:16.255 }, 00:17:16.255 "peer_address": { 00:17:16.255 "trtype": "TCP", 00:17:16.255 "adrfam": "IPv4", 00:17:16.255 "traddr": "10.0.0.1", 00:17:16.255 "trsvcid": "39834" 00:17:16.255 }, 00:17:16.255 "auth": { 00:17:16.255 "state": "completed", 00:17:16.255 "digest": "sha512", 00:17:16.255 "dhgroup": "ffdhe2048" 00:17:16.255 } 00:17:16.255 } 00:17:16.255 ]' 00:17:16.255 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:16.255 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.255 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:16.255 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.255 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.state' 00:17:16.513 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.513 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.513 17:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.513 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:17:17.140 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.140 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.140 17:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.140 17:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.140 17:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.140 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.140 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:17.140 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.140 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.398 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:17:17.398 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:17.398 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:17.398 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:17.398 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:17.398 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:17:17.398 17:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.398 17:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.398 17:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.398 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:17.398 17:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:17.656 00:17:17.656 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:17.656 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:17.656 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.656 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.656 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.656 17:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.656 17:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.656 17:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.656 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:17.656 { 00:17:17.656 "cntlid": 113, 00:17:17.656 "qid": 0, 00:17:17.656 "state": "enabled", 00:17:17.656 "listen_address": { 00:17:17.656 "trtype": "TCP", 00:17:17.656 "adrfam": "IPv4", 00:17:17.656 "traddr": "10.0.0.2", 00:17:17.656 "trsvcid": "4420" 00:17:17.656 }, 00:17:17.656 "peer_address": { 00:17:17.656 "trtype": "TCP", 00:17:17.656 "adrfam": "IPv4", 00:17:17.656 "traddr": "10.0.0.1", 00:17:17.656 "trsvcid": "39862" 00:17:17.656 }, 00:17:17.656 "auth": { 00:17:17.656 "state": "completed", 00:17:17.656 "digest": "sha512", 00:17:17.656 "dhgroup": "ffdhe3072" 00:17:17.656 } 00:17:17.656 } 00:17:17.656 ]' 00:17:17.656 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:17.914 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.914 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:17.914 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.914 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:17.914 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.914 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.914 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.172 17:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.739 17:08:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:18.739 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:18.998 00:17:18.998 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:18.998 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:18.998 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.258 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.258 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.258 17:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.258 17:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.258 17:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.258 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:19.258 { 00:17:19.258 "cntlid": 115, 00:17:19.258 "qid": 0, 00:17:19.258 "state": "enabled", 00:17:19.258 "listen_address": { 00:17:19.258 "trtype": 
"TCP", 00:17:19.258 "adrfam": "IPv4", 00:17:19.258 "traddr": "10.0.0.2", 00:17:19.258 "trsvcid": "4420" 00:17:19.258 }, 00:17:19.258 "peer_address": { 00:17:19.258 "trtype": "TCP", 00:17:19.258 "adrfam": "IPv4", 00:17:19.258 "traddr": "10.0.0.1", 00:17:19.258 "trsvcid": "39882" 00:17:19.258 }, 00:17:19.258 "auth": { 00:17:19.258 "state": "completed", 00:17:19.258 "digest": "sha512", 00:17:19.258 "dhgroup": "ffdhe3072" 00:17:19.258 } 00:17:19.258 } 00:17:19.258 ]' 00:17:19.258 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:19.258 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.258 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:19.258 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.258 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:19.258 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.258 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.258 17:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.517 17:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:17:20.085 17:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.085 17:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.085 17:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.085 17:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.085 17:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.085 17:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:20.085 17:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:20.085 17:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:20.345 17:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:17:20.345 17:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:20.345 17:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.345 17:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:20.345 17:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:20.345 17:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:17:20.345 17:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.345 17:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.345 17:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.345 17:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:20.345 17:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:20.604 00:17:20.604 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:20.604 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:20.604 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.863 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.863 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.863 17:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.863 17:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.863 17:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.863 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:20.863 { 00:17:20.863 "cntlid": 117, 00:17:20.863 "qid": 0, 00:17:20.863 "state": "enabled", 00:17:20.863 "listen_address": { 00:17:20.863 "trtype": "TCP", 00:17:20.863 "adrfam": "IPv4", 00:17:20.863 "traddr": "10.0.0.2", 00:17:20.863 "trsvcid": "4420" 00:17:20.863 }, 00:17:20.863 "peer_address": { 00:17:20.863 "trtype": "TCP", 00:17:20.863 "adrfam": "IPv4", 00:17:20.863 "traddr": "10.0.0.1", 00:17:20.863 "trsvcid": "39906" 00:17:20.863 }, 00:17:20.863 "auth": { 00:17:20.863 "state": "completed", 00:17:20.863 "digest": "sha512", 00:17:20.863 "dhgroup": "ffdhe3072" 00:17:20.863 } 00:17:20.863 } 00:17:20.863 ]' 00:17:20.863 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:20.863 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.863 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:20.863 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.863 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:20.863 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.863 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.863 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:21.122 17:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.690 17:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.949 17:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.949 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.949 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.949 00:17:21.949 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:21.949 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:21.949 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.208 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.208 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.208 17:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.208 17:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.208 17:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.208 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:22.208 { 00:17:22.208 "cntlid": 119, 00:17:22.208 "qid": 0, 00:17:22.208 "state": "enabled", 00:17:22.208 "listen_address": { 00:17:22.208 "trtype": "TCP", 00:17:22.208 "adrfam": "IPv4", 00:17:22.208 "traddr": "10.0.0.2", 00:17:22.208 "trsvcid": "4420" 00:17:22.208 }, 00:17:22.208 "peer_address": { 00:17:22.208 "trtype": "TCP", 00:17:22.208 "adrfam": "IPv4", 00:17:22.208 "traddr": "10.0.0.1", 00:17:22.208 "trsvcid": "39952" 00:17:22.208 }, 00:17:22.208 "auth": { 00:17:22.208 "state": "completed", 00:17:22.208 "digest": "sha512", 00:17:22.208 "dhgroup": "ffdhe3072" 00:17:22.208 } 00:17:22.208 } 00:17:22.208 ]' 00:17:22.208 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:22.208 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.208 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:22.467 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.467 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:22.467 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.467 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.467 17:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.467 17:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:17:23.036 17:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:23.295 17:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:23.554 00:17:23.554 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:23.554 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.554 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:23.813 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.813 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.813 17:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.813 17:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.813 17:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.813 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:23.813 { 00:17:23.813 "cntlid": 121, 00:17:23.813 "qid": 0, 00:17:23.813 "state": "enabled", 00:17:23.813 "listen_address": { 00:17:23.813 "trtype": "TCP", 00:17:23.813 "adrfam": "IPv4", 00:17:23.813 "traddr": "10.0.0.2", 00:17:23.813 "trsvcid": "4420" 00:17:23.813 }, 00:17:23.813 "peer_address": { 00:17:23.813 "trtype": "TCP", 00:17:23.813 "adrfam": "IPv4", 00:17:23.813 "traddr": "10.0.0.1", 00:17:23.813 "trsvcid": "39992" 00:17:23.813 }, 00:17:23.813 "auth": { 00:17:23.813 "state": "completed", 00:17:23.813 "digest": "sha512", 00:17:23.813 "dhgroup": "ffdhe4096" 
00:17:23.813 } 00:17:23.813 } 00:17:23.813 ]' 00:17:23.813 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:23.813 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.813 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:23.813 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.813 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:23.813 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.813 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.813 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.071 17:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:17:24.639 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.639 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.639 17:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.639 17:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.639 17:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.639 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:24.639 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:24.639 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:24.898 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:17:24.898 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:24.898 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:24.898 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:24.898 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:24.898 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:24.898 17:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.898 17:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.898 17:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.898 17:08:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:24.898 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:25.158 00:17:25.158 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:25.158 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:25.158 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.417 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.417 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.417 17:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.417 17:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.417 17:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.417 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:25.417 { 00:17:25.417 "cntlid": 123, 00:17:25.417 "qid": 0, 00:17:25.417 "state": "enabled", 00:17:25.417 "listen_address": { 00:17:25.417 "trtype": "TCP", 00:17:25.417 "adrfam": "IPv4", 00:17:25.417 "traddr": "10.0.0.2", 00:17:25.417 "trsvcid": "4420" 00:17:25.417 }, 00:17:25.417 "peer_address": { 00:17:25.417 "trtype": "TCP", 00:17:25.417 "adrfam": "IPv4", 00:17:25.417 "traddr": "10.0.0.1", 00:17:25.417 "trsvcid": "40026" 00:17:25.417 }, 00:17:25.417 "auth": { 00:17:25.417 "state": "completed", 00:17:25.417 "digest": "sha512", 00:17:25.417 "dhgroup": "ffdhe4096" 00:17:25.417 } 00:17:25.417 } 00:17:25.417 ]' 00:17:25.417 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:25.417 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.417 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:25.417 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.417 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:25.417 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.417 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.417 17:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.675 17:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:17:26.244 17:08:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.244 17:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.244 17:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.244 17:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.244 17:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.244 17:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:26.244 17:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.244 17:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.504 17:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:17:26.504 17:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:26.504 17:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:26.504 17:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:26.504 17:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:26.504 17:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:17:26.504 17:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.504 17:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.504 17:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.504 17:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:26.504 17:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:26.764 00:17:26.764 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:26.764 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:26.764 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.764 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.764 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.764 17:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.764 17:08:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.764 17:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.764 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:26.764 { 00:17:26.764 "cntlid": 125, 00:17:26.764 "qid": 0, 00:17:26.764 "state": "enabled", 00:17:26.764 "listen_address": { 00:17:26.764 "trtype": "TCP", 00:17:26.764 "adrfam": "IPv4", 00:17:26.764 "traddr": "10.0.0.2", 00:17:26.764 "trsvcid": "4420" 00:17:26.764 }, 00:17:26.764 "peer_address": { 00:17:26.764 "trtype": "TCP", 00:17:26.764 "adrfam": "IPv4", 00:17:26.764 "traddr": "10.0.0.1", 00:17:26.764 "trsvcid": "37368" 00:17:26.764 }, 00:17:26.764 "auth": { 00:17:26.764 "state": "completed", 00:17:26.764 "digest": "sha512", 00:17:26.764 "dhgroup": "ffdhe4096" 00:17:26.764 } 00:17:26.764 } 00:17:26.764 ]' 00:17:26.764 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:27.022 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.022 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:27.022 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.022 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:27.022 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.022 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.022 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.280 17:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest 
dhgroup key qpairs 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.847 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:28.105 00:17:28.105 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:28.105 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:28.105 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.363 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.363 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.363 17:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.363 17:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.363 17:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.363 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:28.363 { 00:17:28.363 "cntlid": 127, 00:17:28.363 "qid": 0, 00:17:28.363 "state": "enabled", 00:17:28.363 "listen_address": { 00:17:28.363 "trtype": "TCP", 00:17:28.363 "adrfam": "IPv4", 00:17:28.363 "traddr": "10.0.0.2", 00:17:28.363 "trsvcid": "4420" 00:17:28.363 }, 00:17:28.363 "peer_address": { 00:17:28.363 "trtype": "TCP", 00:17:28.363 "adrfam": "IPv4", 00:17:28.363 "traddr": "10.0.0.1", 00:17:28.363 "trsvcid": "37386" 00:17:28.363 }, 00:17:28.363 "auth": { 00:17:28.363 "state": "completed", 00:17:28.363 "digest": "sha512", 00:17:28.363 "dhgroup": "ffdhe4096" 00:17:28.363 } 00:17:28.363 } 00:17:28.363 ]' 00:17:28.363 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:28.363 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.363 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:28.363 17:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.363 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.state' 00:17:28.621 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.621 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.621 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.621 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:17:29.187 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.187 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.187 17:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.187 17:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.187 17:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.187 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.187 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:29.187 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.187 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.445 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:17:29.445 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:29.445 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:29.445 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:29.445 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:29.445 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:17:29.445 17:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.445 17:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.445 17:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.445 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:29.445 17:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:29.703 00:17:29.703 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:29.703 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:29.703 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.961 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.961 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.961 17:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.961 17:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.961 17:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.961 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:29.961 { 00:17:29.961 "cntlid": 129, 00:17:29.961 "qid": 0, 00:17:29.961 "state": "enabled", 00:17:29.961 "listen_address": { 00:17:29.961 "trtype": "TCP", 00:17:29.961 "adrfam": "IPv4", 00:17:29.961 "traddr": "10.0.0.2", 00:17:29.961 "trsvcid": "4420" 00:17:29.961 }, 00:17:29.961 "peer_address": { 00:17:29.961 "trtype": "TCP", 00:17:29.961 "adrfam": "IPv4", 00:17:29.961 "traddr": "10.0.0.1", 00:17:29.961 "trsvcid": "37418" 00:17:29.961 }, 00:17:29.961 "auth": { 00:17:29.961 "state": "completed", 00:17:29.961 "digest": "sha512", 00:17:29.961 "dhgroup": "ffdhe6144" 00:17:29.961 } 00:17:29.961 } 00:17:29.961 ]' 00:17:29.961 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:29.961 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.961 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:29.961 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.961 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:29.961 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.961 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.961 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.218 17:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:17:30.784 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.784 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.784 17:08:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.784 17:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.784 17:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.784 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:30.784 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.784 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:31.042 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:17:31.042 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:31.042 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.042 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:31.042 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:31.042 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:31.042 17:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.042 17:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.042 17:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.042 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:31.042 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:31.300 00:17:31.300 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:31.300 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:31.300 17:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.558 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.558 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.558 17:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.558 17:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.558 17:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.558 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:31.558 { 00:17:31.558 "cntlid": 131, 00:17:31.558 "qid": 0, 00:17:31.558 "state": "enabled", 00:17:31.558 "listen_address": { 00:17:31.558 "trtype": 
"TCP", 00:17:31.558 "adrfam": "IPv4", 00:17:31.558 "traddr": "10.0.0.2", 00:17:31.558 "trsvcid": "4420" 00:17:31.558 }, 00:17:31.558 "peer_address": { 00:17:31.558 "trtype": "TCP", 00:17:31.558 "adrfam": "IPv4", 00:17:31.558 "traddr": "10.0.0.1", 00:17:31.558 "trsvcid": "37442" 00:17:31.558 }, 00:17:31.558 "auth": { 00:17:31.558 "state": "completed", 00:17:31.558 "digest": "sha512", 00:17:31.558 "dhgroup": "ffdhe6144" 00:17:31.558 } 00:17:31.558 } 00:17:31.558 ]' 00:17:31.558 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:31.558 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.558 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:31.558 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.558 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:31.816 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.816 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.816 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.816 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:17:32.383 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.383 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.383 17:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.383 17:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.383 17:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.383 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:32.383 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.383 17:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.642 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:17:32.642 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:32.642 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:32.642 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:32.642 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:32.642 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:17:32.642 17:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.642 17:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.642 17:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.642 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:32.642 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:32.901 00:17:32.901 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:32.901 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:32.901 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.160 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.160 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.160 17:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.160 17:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.160 17:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.160 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:33.160 { 00:17:33.160 "cntlid": 133, 00:17:33.160 "qid": 0, 00:17:33.160 "state": "enabled", 00:17:33.160 "listen_address": { 00:17:33.160 "trtype": "TCP", 00:17:33.160 "adrfam": "IPv4", 00:17:33.160 "traddr": "10.0.0.2", 00:17:33.160 "trsvcid": "4420" 00:17:33.160 }, 00:17:33.160 "peer_address": { 00:17:33.160 "trtype": "TCP", 00:17:33.160 "adrfam": "IPv4", 00:17:33.160 "traddr": "10.0.0.1", 00:17:33.160 "trsvcid": "37468" 00:17:33.160 }, 00:17:33.160 "auth": { 00:17:33.160 "state": "completed", 00:17:33.160 "digest": "sha512", 00:17:33.160 "dhgroup": "ffdhe6144" 00:17:33.160 } 00:17:33.160 } 00:17:33.160 ]' 00:17:33.160 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:33.160 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.160 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:33.160 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.160 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:33.160 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.160 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.160 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:33.419 17:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:17:33.987 17:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.987 17:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.987 17:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.987 17:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.987 17:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.987 17:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:33.987 17:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:33.987 17:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:34.246 17:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:17:34.246 17:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:34.246 17:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:34.246 17:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:34.246 17:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:34.246 17:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:34.246 17:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.246 17:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.246 17:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.246 17:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.246 17:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.505 00:17:34.505 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:34.505 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:34.505 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.764 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.764 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.764 17:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.764 17:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.764 17:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.764 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:34.764 { 00:17:34.764 "cntlid": 135, 00:17:34.764 "qid": 0, 00:17:34.764 "state": "enabled", 00:17:34.764 "listen_address": { 00:17:34.764 "trtype": "TCP", 00:17:34.764 "adrfam": "IPv4", 00:17:34.764 "traddr": "10.0.0.2", 00:17:34.764 "trsvcid": "4420" 00:17:34.764 }, 00:17:34.764 "peer_address": { 00:17:34.764 "trtype": "TCP", 00:17:34.764 "adrfam": "IPv4", 00:17:34.764 "traddr": "10.0.0.1", 00:17:34.764 "trsvcid": "37488" 00:17:34.764 }, 00:17:34.764 "auth": { 00:17:34.764 "state": "completed", 00:17:34.764 "digest": "sha512", 00:17:34.764 "dhgroup": "ffdhe6144" 00:17:34.764 } 00:17:34.764 } 00:17:34.764 ]' 00:17:34.764 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:34.764 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.764 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:34.764 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.764 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:34.764 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.764 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.764 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.023 17:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:17:35.624 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.624 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.624 17:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.624 17:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.624 17:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.624 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.624 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:35.624 17:08:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.624 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:35.883 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:17:35.883 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:35.883 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:35.883 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:35.883 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:35.883 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:17:35.883 17:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.883 17:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.883 17:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.883 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:35.883 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:36.142 00:17:36.142 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:36.142 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:36.142 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.401 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.401 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.401 17:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.401 17:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.401 17:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.401 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:36.401 { 00:17:36.401 "cntlid": 137, 00:17:36.401 "qid": 0, 00:17:36.401 "state": "enabled", 00:17:36.401 "listen_address": { 00:17:36.401 "trtype": "TCP", 00:17:36.401 "adrfam": "IPv4", 00:17:36.401 "traddr": "10.0.0.2", 00:17:36.401 "trsvcid": "4420" 00:17:36.401 }, 00:17:36.401 "peer_address": { 00:17:36.401 "trtype": "TCP", 00:17:36.401 "adrfam": "IPv4", 00:17:36.401 "traddr": "10.0.0.1", 00:17:36.401 "trsvcid": "55602" 00:17:36.401 }, 00:17:36.401 "auth": { 00:17:36.401 "state": "completed", 00:17:36.401 "digest": "sha512", 00:17:36.401 "dhgroup": "ffdhe8192" 
00:17:36.401 } 00:17:36.401 } 00:17:36.401 ]' 00:17:36.401 17:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:36.401 17:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.401 17:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:36.660 17:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.660 17:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:36.660 17:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.660 17:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.660 17:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.660 17:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:17:37.228 17:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.228 17:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.228 17:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.228 17:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.228 17:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.228 17:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:37.228 17:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.228 17:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:37.487 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:17:37.487 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:37.487 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:37.487 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:37.487 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:37.487 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:37.487 17:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.487 17:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.487 17:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.487 17:08:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:37.487 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:38.055 00:17:38.055 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:38.055 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.055 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:38.055 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.055 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.055 17:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.055 17:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.314 17:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.314 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:38.314 { 00:17:38.314 "cntlid": 139, 00:17:38.314 "qid": 0, 00:17:38.314 "state": "enabled", 00:17:38.314 "listen_address": { 00:17:38.314 "trtype": "TCP", 00:17:38.314 "adrfam": "IPv4", 00:17:38.314 "traddr": "10.0.0.2", 00:17:38.314 "trsvcid": "4420" 00:17:38.314 }, 00:17:38.314 "peer_address": { 00:17:38.314 "trtype": "TCP", 00:17:38.314 "adrfam": "IPv4", 00:17:38.314 "traddr": "10.0.0.1", 00:17:38.314 "trsvcid": "55636" 00:17:38.314 }, 00:17:38.314 "auth": { 00:17:38.314 "state": "completed", 00:17:38.314 "digest": "sha512", 00:17:38.314 "dhgroup": "ffdhe8192" 00:17:38.314 } 00:17:38.314 } 00:17:38.314 ]' 00:17:38.314 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:38.314 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.314 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:38.314 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.314 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:38.314 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.314 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.314 17:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.572 17:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YmZkN2EyYmFmNjM3NDRiMTQzMzRhMDZhZDhmMjE1MzHZt7k4: 00:17:39.140 17:08:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.140 17:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:39.140 17:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.140 17:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.140 17:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.140 17:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:39.140 17:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.140 17:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:39.141 17:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:17:39.141 17:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:39.141 17:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.141 17:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:39.141 17:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:39.141 17:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 00:17:39.141 17:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.141 17:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.141 17:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.141 17:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:39.141 17:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:39.708 00:17:39.708 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:39.708 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:39.708 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.967 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.967 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.967 17:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.967 17:08:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.967 17:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.967 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:39.967 { 00:17:39.967 "cntlid": 141, 00:17:39.967 "qid": 0, 00:17:39.967 "state": "enabled", 00:17:39.967 "listen_address": { 00:17:39.967 "trtype": "TCP", 00:17:39.967 "adrfam": "IPv4", 00:17:39.967 "traddr": "10.0.0.2", 00:17:39.967 "trsvcid": "4420" 00:17:39.967 }, 00:17:39.967 "peer_address": { 00:17:39.967 "trtype": "TCP", 00:17:39.967 "adrfam": "IPv4", 00:17:39.967 "traddr": "10.0.0.1", 00:17:39.967 "trsvcid": "55656" 00:17:39.967 }, 00:17:39.967 "auth": { 00:17:39.967 "state": "completed", 00:17:39.967 "digest": "sha512", 00:17:39.967 "dhgroup": "ffdhe8192" 00:17:39.967 } 00:17:39.967 } 00:17:39.967 ]' 00:17:39.967 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:39.967 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.967 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:39.967 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.967 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:39.967 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.967 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.967 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.226 17:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTg0MWJjZWY2ZjA4NjhkYmU3YzNmOTVjNDgzMTQ1MzFhZjdhYzg3NTk4Mjc1NDEzS6SZTg==: 00:17:40.794 17:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.795 17:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.795 17:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.795 17:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.795 17:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.795 17:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:40.795 17:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.795 17:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.053 17:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:17:41.053 17:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest 
dhgroup key qpairs 00:17:41.053 17:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:41.053 17:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:41.053 17:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:41.054 17:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:41.054 17:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.054 17:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.054 17:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.054 17:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.054 17:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.620 00:17:41.620 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:41.620 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:41.620 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.620 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.620 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.620 17:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.620 17:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.620 17:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.620 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:41.620 { 00:17:41.620 "cntlid": 143, 00:17:41.620 "qid": 0, 00:17:41.620 "state": "enabled", 00:17:41.620 "listen_address": { 00:17:41.620 "trtype": "TCP", 00:17:41.620 "adrfam": "IPv4", 00:17:41.620 "traddr": "10.0.0.2", 00:17:41.620 "trsvcid": "4420" 00:17:41.620 }, 00:17:41.620 "peer_address": { 00:17:41.620 "trtype": "TCP", 00:17:41.620 "adrfam": "IPv4", 00:17:41.620 "traddr": "10.0.0.1", 00:17:41.620 "trsvcid": "55682" 00:17:41.620 }, 00:17:41.620 "auth": { 00:17:41.620 "state": "completed", 00:17:41.620 "digest": "sha512", 00:17:41.620 "dhgroup": "ffdhe8192" 00:17:41.620 } 00:17:41.620 } 00:17:41.620 ]' 00:17:41.620 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:41.620 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.620 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:41.889 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.889 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.state' 00:17:41.889 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.889 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.889 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.889 17:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:ZDdjZDg1NDRlMzdkYmJkYmU1YTg1MDg3MmQwNGNmYTFmYzk2MmVlYTNmNDJiYjAxOWIyZjMwMzdiNjgyYTFhZb0Oe7c=: 00:17:42.457 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.457 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.457 17:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.457 17:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.457 17:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.457 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:17:42.457 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:17:42.457 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:17:42.457 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.457 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.457 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:42.714 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:17:42.714 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:42.714 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:42.714 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:42.714 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:42.714 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 00:17:42.714 17:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.714 17:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.714 17:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.714 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:42.714 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:43.277 00:17:43.277 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:43.277 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:43.277 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.277 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.277 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.277 17:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.277 17:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.534 17:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.534 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:43.534 { 00:17:43.534 "cntlid": 145, 00:17:43.534 "qid": 0, 00:17:43.534 "state": "enabled", 00:17:43.534 "listen_address": { 00:17:43.534 "trtype": "TCP", 00:17:43.534 "adrfam": "IPv4", 00:17:43.534 "traddr": "10.0.0.2", 00:17:43.534 "trsvcid": "4420" 00:17:43.534 }, 00:17:43.534 "peer_address": { 00:17:43.534 "trtype": "TCP", 00:17:43.534 "adrfam": "IPv4", 00:17:43.534 "traddr": "10.0.0.1", 00:17:43.534 "trsvcid": "55716" 00:17:43.534 }, 00:17:43.534 "auth": { 00:17:43.534 "state": "completed", 00:17:43.534 "digest": "sha512", 00:17:43.534 "dhgroup": "ffdhe8192" 00:17:43.534 } 00:17:43.534 } 00:17:43.534 ]' 00:17:43.534 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:43.534 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.534 17:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:43.534 17:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.534 17:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:43.534 17:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.534 17:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.534 17:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.792 17:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:NDE2NmE5M2VjM2Y4MmVlNGU3NTU0OGNlMGFkZDdiMzczMzI2N2FmNGY4M2YzZDlkN/JmXw==: 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.358 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:44.358 17:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:44.924 request: 00:17:44.924 { 00:17:44.924 "name": "nvme0", 00:17:44.924 "trtype": "tcp", 00:17:44.924 "traddr": "10.0.0.2", 00:17:44.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:44.924 "adrfam": "ipv4", 00:17:44.924 "trsvcid": "4420", 00:17:44.924 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:44.924 "dhchap_key": "key2", 00:17:44.924 "method": "bdev_nvme_attach_controller", 00:17:44.924 "req_id": 1 00:17:44.924 } 00:17:44.924 Got JSON-RPC error response 00:17:44.924 response: 00:17:44.924 { 00:17:44.924 "code": -32602, 00:17:44.924 "message": "Invalid parameters" 00:17:44.924 } 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 
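The failure above is the expected negative path of the DH-HMAC-CHAP test: the host NQN was just re-registered on the subsystem with --dhchap-key key1 only, so the attach attempt with --dhchap-key key2 is rejected with JSON-RPC error -32602 (Invalid parameters). A minimal sketch of the equivalent manual sequence, condensed from the commands traced above (rpc.py stands for the full scripts/rpc.py path used in this run, <host-nqn> abbreviates the nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-... host NQN, target-side calls go to the default RPC socket while the host-side call uses /var/tmp/host.sock):

  # target side: allow the host, but only with key1
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host-nqn> --dhchap-key key1
  # host side: attach with the wrong key -- expected to fail
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q <host-nqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
  # expected JSON-RPC error response: {"code": -32602, "message": "Invalid parameters"}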
00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3060888 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3060888 ']' 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3060888 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3060888 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3060888' 00:17:44.924 killing process with pid 3060888 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3060888 00:17:44.924 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3060888 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:45.183 rmmod nvme_tcp 00:17:45.183 rmmod nvme_fabrics 00:17:45.183 rmmod nvme_keyring 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3060816 ']' 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3060816 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3060816 ']' 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # 
kill -0 3060816 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3060816 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3060816' 00:17:45.183 killing process with pid 3060816 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3060816 00:17:45.183 17:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3060816 00:17:45.441 17:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:45.441 17:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:45.441 17:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:45.441 17:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:45.441 17:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:45.441 17:08:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.441 17:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.441 17:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.972 17:08:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:47.972 17:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.fsB /tmp/spdk.key-sha256.LnA /tmp/spdk.key-sha384.hGd /tmp/spdk.key-sha512.z8S /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:47.972 00:17:47.972 real 2m4.432s 00:17:47.972 user 4m44.994s 00:17:47.972 sys 0m19.602s 00:17:47.972 17:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:47.972 17:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.972 ************************************ 00:17:47.972 END TEST nvmf_auth_target 00:17:47.972 ************************************ 00:17:47.972 17:08:35 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:47.972 17:08:35 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:47.972 17:08:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:47.972 17:08:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:47.972 17:08:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:47.972 ************************************ 00:17:47.972 START TEST nvmf_bdevio_no_huge 00:17:47.972 ************************************ 00:17:47.972 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:47.972 * Looking for test storage... 
00:17:47.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.972 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.972 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:47.972 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.972 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.972 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.972 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.972 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.972 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.972 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.972 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.972 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.972 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:47.973 17:08:35 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:47.973 17:08:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:53.234 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:53.234 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:53.234 Found net devices under 0000:86:00.0: cvl_0_0 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:53.234 17:08:40 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:53.234 Found net devices under 0000:86:00.1: cvl_0_1 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:53.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:17:53.234 00:17:53.234 --- 10.0.0.2 ping statistics --- 00:17:53.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.234 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:53.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:53.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:17:53.234 00:17:53.234 --- 10.0.0.1 ping statistics --- 00:17:53.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.234 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.234 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3084282 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3084282 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 3084282 ']' 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
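Because this pass runs the target with --no-huge, the setup traced above first rebuilds the two-interface test topology and then verifies it with the two pings: the target-side port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace, while the initiator-side port (cvl_0_1, 10.0.0.1) stays in the default namespace. A condensed sketch of those commands, taken from the trace above with the nvmf_tgt path shortened (illustrative for this run's interface names, not a general recipe):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # target started inside the namespace without hugepages, 1024 MB of plain memory, core mask 0x78
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78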
00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:53.235 17:08:40 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:53.235 [2024-05-15 17:08:40.842791] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:17:53.235 [2024-05-15 17:08:40.842839] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:53.493 [2024-05-15 17:08:40.906625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.493 [2024-05-15 17:08:40.991547] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.493 [2024-05-15 17:08:40.991580] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.493 [2024-05-15 17:08:40.991587] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.493 [2024-05-15 17:08:40.991593] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.493 [2024-05-15 17:08:40.991598] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.493 [2024-05-15 17:08:40.991646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:53.493 [2024-05-15 17:08:40.991753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:53.493 [2024-05-15 17:08:40.991861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.493 [2024-05-15 17:08:40.991862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:54.059 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:54.059 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:17:54.059 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:54.059 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.059 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.059 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.059 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:54.059 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.059 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.059 [2024-05-15 17:08:41.703059] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.059 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.059 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:54.059 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.059 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.317 Malloc0 00:17:54.317 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.317 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:54.317 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:54.318 [2024-05-15 17:08:41.747136] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:54.318 [2024-05-15 17:08:41.747364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:54.318 { 00:17:54.318 "params": { 00:17:54.318 "name": "Nvme$subsystem", 00:17:54.318 "trtype": "$TEST_TRANSPORT", 00:17:54.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:54.318 "adrfam": "ipv4", 00:17:54.318 "trsvcid": "$NVMF_PORT", 00:17:54.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:54.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:54.318 "hdgst": ${hdgst:-false}, 00:17:54.318 "ddgst": ${ddgst:-false} 00:17:54.318 }, 00:17:54.318 "method": "bdev_nvme_attach_controller" 00:17:54.318 } 00:17:54.318 EOF 00:17:54.318 )") 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
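Before the bdevio run, the trace above provisions the target it will exercise: a TCP transport, a 64 MiB malloc bdev (131072 blocks of 512 bytes), a subsystem holding that namespace, and a listener on 10.0.0.2:4420; the test then launches the bdevio app as an initiator with a generated JSON config (printed just below) fed in over /dev/fd/62. Condensed from the RPCs traced above (rpc.py again stands for the full scripts/rpc.py path; the --json redirection is simplified):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # bdevio attaches to that listener using the generated bdev_nvme_attach_controller config
  test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024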
00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:54.318 17:08:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:54.318 "params": { 00:17:54.318 "name": "Nvme1", 00:17:54.318 "trtype": "tcp", 00:17:54.318 "traddr": "10.0.0.2", 00:17:54.318 "adrfam": "ipv4", 00:17:54.318 "trsvcid": "4420", 00:17:54.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:54.318 "hdgst": false, 00:17:54.318 "ddgst": false 00:17:54.318 }, 00:17:54.318 "method": "bdev_nvme_attach_controller" 00:17:54.318 }' 00:17:54.318 [2024-05-15 17:08:41.797207] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:17:54.318 [2024-05-15 17:08:41.797256] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3084514 ] 00:17:54.318 [2024-05-15 17:08:41.855208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:54.318 [2024-05-15 17:08:41.942462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.318 [2024-05-15 17:08:41.942558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.318 [2024-05-15 17:08:41.942558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.576 I/O targets: 00:17:54.576 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:54.576 00:17:54.576 00:17:54.576 CUnit - A unit testing framework for C - Version 2.1-3 00:17:54.576 http://cunit.sourceforge.net/ 00:17:54.576 00:17:54.576 00:17:54.576 Suite: bdevio tests on: Nvme1n1 00:17:54.576 Test: blockdev write read block ...passed 00:17:54.576 Test: blockdev write zeroes read block ...passed 00:17:54.576 Test: blockdev write zeroes read no split ...passed 00:17:54.834 Test: blockdev write zeroes read split ...passed 00:17:54.834 Test: blockdev write zeroes read split partial ...passed 00:17:54.834 Test: blockdev reset ...[2024-05-15 17:08:42.291504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:54.834 [2024-05-15 17:08:42.291568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157d5a0 (9): Bad file descriptor 00:17:54.834 [2024-05-15 17:08:42.305308] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:54.834 passed 00:17:54.834 Test: blockdev write read 8 blocks ...passed 00:17:54.834 Test: blockdev write read size > 128k ...passed 00:17:54.834 Test: blockdev write read invalid size ...passed 00:17:54.834 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:54.834 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:54.834 Test: blockdev write read max offset ...passed 00:17:54.834 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:54.834 Test: blockdev writev readv 8 blocks ...passed 00:17:55.093 Test: blockdev writev readv 30 x 1block ...passed 00:17:55.093 Test: blockdev writev readv block ...passed 00:17:55.093 Test: blockdev writev readv size > 128k ...passed 00:17:55.093 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:55.093 Test: blockdev comparev and writev ...[2024-05-15 17:08:42.563546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.093 [2024-05-15 17:08:42.563576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.093 [2024-05-15 17:08:42.563589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.093 [2024-05-15 17:08:42.563597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:55.093 [2024-05-15 17:08:42.563869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.093 [2024-05-15 17:08:42.563879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:55.093 [2024-05-15 17:08:42.563891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.093 [2024-05-15 17:08:42.563898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:55.093 [2024-05-15 17:08:42.564181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.093 [2024-05-15 17:08:42.564192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:55.093 [2024-05-15 17:08:42.564203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.093 [2024-05-15 17:08:42.564210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:55.093 [2024-05-15 17:08:42.564479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.093 [2024-05-15 17:08:42.564491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:55.093 [2024-05-15 17:08:42.564503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:55.093 [2024-05-15 17:08:42.564512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:55.093 passed 00:17:55.093 Test: blockdev nvme passthru rw ...passed 00:17:55.093 Test: blockdev nvme passthru vendor specific ...[2024-05-15 17:08:42.647570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:55.093 [2024-05-15 17:08:42.647585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:55.093 [2024-05-15 17:08:42.647727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:55.093 [2024-05-15 17:08:42.647742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:55.093 [2024-05-15 17:08:42.647875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:55.093 [2024-05-15 17:08:42.647886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:55.093 [2024-05-15 17:08:42.648020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:55.093 [2024-05-15 17:08:42.648030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:55.093 passed 00:17:55.093 Test: blockdev nvme admin passthru ...passed 00:17:55.093 Test: blockdev copy ...passed 00:17:55.093 00:17:55.093 Run Summary: Type Total Ran Passed Failed Inactive 00:17:55.093 suites 1 1 n/a 0 0 00:17:55.093 tests 23 23 23 0 0 00:17:55.093 asserts 152 152 152 0 n/a 00:17:55.093 00:17:55.093 Elapsed time = 1.240 seconds 00:17:55.351 17:08:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:55.351 17:08:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.351 17:08:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:55.351 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.351 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:55.351 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:55.351 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:55.351 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:55.351 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:55.351 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:55.351 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:55.351 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:55.610 rmmod nvme_tcp 00:17:55.610 rmmod nvme_fabrics 00:17:55.610 rmmod nvme_keyring 00:17:55.610 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:55.610 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:55.610 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:55.610 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3084282 ']' 00:17:55.610 17:08:43 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3084282 00:17:55.610 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 3084282 ']' 00:17:55.610 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 3084282 00:17:55.610 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:17:55.610 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:55.610 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3084282 00:17:55.610 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:17:55.610 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:17:55.610 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3084282' 00:17:55.610 killing process with pid 3084282 00:17:55.610 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 3084282 00:17:55.610 [2024-05-15 17:08:43.109264] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:55.610 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 3084282 00:17:55.868 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.868 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.868 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.868 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.868 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.868 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.868 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.868 17:08:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.403 17:08:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:58.403 00:17:58.403 real 0m10.326s 00:17:58.403 user 0m13.182s 00:17:58.403 sys 0m4.967s 00:17:58.403 17:08:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:58.403 17:08:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:58.403 ************************************ 00:17:58.403 END TEST nvmf_bdevio_no_huge 00:17:58.403 ************************************ 00:17:58.403 17:08:45 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:58.403 17:08:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:58.403 17:08:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:58.403 17:08:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:58.403 ************************************ 00:17:58.403 START TEST nvmf_tls 00:17:58.403 ************************************ 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
00:17:58.403 * Looking for test storage... 00:17:58.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.403 17:08:45 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:58.404 17:08:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:03.711 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:03.711 
17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:03.711 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:03.711 Found net devices under 0000:86:00.0: cvl_0_0 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:03.711 Found net devices under 0000:86:00.1: cvl_0_1 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.711 
17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:03.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:18:03.711 00:18:03.711 --- 10.0.0.2 ping statistics --- 00:18:03.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.711 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:03.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:18:03.711 00:18:03.711 --- 10.0.0.1 ping statistics --- 00:18:03.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.711 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3088112 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3088112 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3088112 ']' 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:03.711 17:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.712 17:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:03.712 17:08:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.970 [2024-05-15 17:08:51.400209] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:18:03.970 [2024-05-15 17:08:51.400253] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.970 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.970 [2024-05-15 17:08:51.459000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.970 [2024-05-15 17:08:51.533087] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.970 [2024-05-15 17:08:51.533129] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:03.970 [2024-05-15 17:08:51.533136] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.970 [2024-05-15 17:08:51.533141] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.970 [2024-05-15 17:08:51.533147] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.970 [2024-05-15 17:08:51.533176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.536 17:08:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:04.536 17:08:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:04.536 17:08:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.536 17:08:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:04.536 17:08:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.793 17:08:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.793 17:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:04.793 17:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:04.793 true 00:18:04.793 17:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:04.793 17:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:05.050 17:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:05.050 17:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:05.050 17:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:05.309 17:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:05.309 17:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:05.309 17:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:05.309 17:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:05.309 17:08:52 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:05.567 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:05.567 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:05.825 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:05.825 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:05.825 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:05.825 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:05.825 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:05.825 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:05.825 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:06.083 17:08:53 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:06.083 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:06.341 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:06.341 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:06.341 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:06.341 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:06.341 17:08:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.Xm8ApRGOFe 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.GoybeJ8Iud 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.Xm8ApRGOFe 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.GoybeJ8Iud 00:18:06.599 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:18:06.858 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:07.117 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.Xm8ApRGOFe 00:18:07.117 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Xm8ApRGOFe 00:18:07.117 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:07.375 [2024-05-15 17:08:54.801194] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.375 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:07.375 17:08:54 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:07.633 [2024-05-15 17:08:55.113968] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:07.633 [2024-05-15 17:08:55.114016] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:07.633 [2024-05-15 17:08:55.114211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.633 17:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:07.633 malloc0 00:18:07.634 17:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:07.892 17:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Xm8ApRGOFe 00:18:08.150 [2024-05-15 17:08:55.615435] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:08.150 17:08:55 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Xm8ApRGOFe 00:18:08.150 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.125 Initializing NVMe Controllers 00:18:18.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:18.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:18.125 Initialization complete. Launching workers. 
00:18:18.125 ======================================================== 00:18:18.125 Latency(us) 00:18:18.125 Device Information : IOPS MiB/s Average min max 00:18:18.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16403.78 64.08 3901.98 802.82 5861.62 00:18:18.125 ======================================================== 00:18:18.125 Total : 16403.78 64.08 3901.98 802.82 5861.62 00:18:18.125 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xm8ApRGOFe 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Xm8ApRGOFe' 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3090757 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3090757 /var/tmp/bdevperf.sock 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3090757 ']' 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:18.125 17:09:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.125 [2024-05-15 17:09:05.770133] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:18:18.125 [2024-05-15 17:09:05.770186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090757 ] 00:18:18.383 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.383 [2024-05-15 17:09:05.820331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.383 [2024-05-15 17:09:05.897362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.948 17:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:18.948 17:09:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:18.948 17:09:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Xm8ApRGOFe 00:18:19.206 [2024-05-15 17:09:06.743221] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.206 [2024-05-15 17:09:06.743291] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:19.206 TLSTESTn1 00:18:19.206 17:09:06 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:19.464 Running I/O for 10 seconds... 00:18:29.431 00:18:29.431 Latency(us) 00:18:29.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.431 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:29.431 Verification LBA range: start 0x0 length 0x2000 00:18:29.431 TLSTESTn1 : 10.02 5536.89 21.63 0.00 0.00 23079.19 6867.03 37384.01 00:18:29.431 =================================================================================================================== 00:18:29.431 Total : 5536.89 21.63 0.00 0.00 23079.19 6867.03 37384.01 00:18:29.431 0 00:18:29.431 17:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:29.431 17:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3090757 00:18:29.431 17:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3090757 ']' 00:18:29.431 17:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3090757 00:18:29.431 17:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:29.431 17:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:29.431 17:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3090757 00:18:29.432 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:29.432 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:29.432 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3090757' 00:18:29.432 killing process with pid 3090757 00:18:29.432 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3090757 00:18:29.432 Received shutdown signal, test time was about 10.000000 seconds 00:18:29.432 00:18:29.432 Latency(us) 00:18:29.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:29.432 =================================================================================================================== 00:18:29.432 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:29.432 [2024-05-15 17:09:17.033855] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:29.432 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3090757 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GoybeJ8Iud 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GoybeJ8Iud 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GoybeJ8Iud 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GoybeJ8Iud' 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3092985 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3092985 /var/tmp/bdevperf.sock 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3092985 ']' 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:29.690 17:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.690 [2024-05-15 17:09:17.289067] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:18:29.690 [2024-05-15 17:09:17.289115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092985 ] 00:18:29.690 EAL: No free 2048 kB hugepages reported on node 1 00:18:29.690 [2024-05-15 17:09:17.337518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.949 [2024-05-15 17:09:17.409697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.515 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:30.515 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:30.515 17:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GoybeJ8Iud 00:18:30.774 [2024-05-15 17:09:18.252114] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.774 [2024-05-15 17:09:18.252190] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:30.774 [2024-05-15 17:09:18.257560] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:30.774 [2024-05-15 17:09:18.258500] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c9490 (107): Transport endpoint is not connected 00:18:30.774 [2024-05-15 17:09:18.259493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c9490 (9): Bad file descriptor 00:18:30.774 [2024-05-15 17:09:18.260495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:30.774 [2024-05-15 17:09:18.260506] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:30.774 [2024-05-15 17:09:18.260516] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:30.774 request: 00:18:30.774 { 00:18:30.774 "name": "TLSTEST", 00:18:30.774 "trtype": "tcp", 00:18:30.774 "traddr": "10.0.0.2", 00:18:30.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.774 "adrfam": "ipv4", 00:18:30.774 "trsvcid": "4420", 00:18:30.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.774 "psk": "/tmp/tmp.GoybeJ8Iud", 00:18:30.774 "method": "bdev_nvme_attach_controller", 00:18:30.774 "req_id": 1 00:18:30.774 } 00:18:30.774 Got JSON-RPC error response 00:18:30.774 response: 00:18:30.774 { 00:18:30.774 "code": -32602, 00:18:30.774 "message": "Invalid parameters" 00:18:30.774 } 00:18:30.774 17:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3092985 00:18:30.774 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3092985 ']' 00:18:30.774 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3092985 00:18:30.774 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:30.774 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:30.774 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3092985 00:18:30.774 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:30.774 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:30.774 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3092985' 00:18:30.774 killing process with pid 3092985 00:18:30.774 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3092985 00:18:30.774 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.774 00:18:30.774 Latency(us) 00:18:30.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.774 =================================================================================================================== 00:18:30.774 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.774 [2024-05-15 17:09:18.321599] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:30.774 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3092985 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Xm8ApRGOFe 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Xm8ApRGOFe 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Xm8ApRGOFe 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Xm8ApRGOFe' 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3093223 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3093223 /var/tmp/bdevperf.sock 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3093223 ']' 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:31.033 17:09:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.033 [2024-05-15 17:09:18.570669] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:18:31.033 [2024-05-15 17:09:18.570717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093223 ] 00:18:31.033 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.033 [2024-05-15 17:09:18.620079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.292 [2024-05-15 17:09:18.698837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.857 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:31.857 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:31.858 17:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.Xm8ApRGOFe 00:18:32.116 [2024-05-15 17:09:19.552449] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.116 [2024-05-15 17:09:19.552517] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:32.116 [2024-05-15 17:09:19.560795] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:32.116 [2024-05-15 17:09:19.560822] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:32.116 [2024-05-15 17:09:19.560847] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:32.116 [2024-05-15 17:09:19.561783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cb490 (107): Transport endpoint is not connected 00:18:32.116 [2024-05-15 17:09:19.562776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cb490 (9): Bad file descriptor 00:18:32.116 [2024-05-15 17:09:19.563778] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:32.116 [2024-05-15 17:09:19.563788] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:32.116 [2024-05-15 17:09:19.563797] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:32.116 request: 00:18:32.116 { 00:18:32.116 "name": "TLSTEST", 00:18:32.116 "trtype": "tcp", 00:18:32.116 "traddr": "10.0.0.2", 00:18:32.116 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:32.116 "adrfam": "ipv4", 00:18:32.116 "trsvcid": "4420", 00:18:32.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.116 "psk": "/tmp/tmp.Xm8ApRGOFe", 00:18:32.116 "method": "bdev_nvme_attach_controller", 00:18:32.116 "req_id": 1 00:18:32.116 } 00:18:32.116 Got JSON-RPC error response 00:18:32.116 response: 00:18:32.116 { 00:18:32.116 "code": -32602, 00:18:32.116 "message": "Invalid parameters" 00:18:32.116 } 00:18:32.116 17:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3093223 00:18:32.116 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3093223 ']' 00:18:32.116 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3093223 00:18:32.116 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:32.116 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:32.116 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3093223 00:18:32.116 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:32.116 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:32.116 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3093223' 00:18:32.116 killing process with pid 3093223 00:18:32.116 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3093223 00:18:32.116 Received shutdown signal, test time was about 10.000000 seconds 00:18:32.116 00:18:32.116 Latency(us) 00:18:32.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:32.116 =================================================================================================================== 00:18:32.116 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:32.116 [2024-05-15 17:09:19.625708] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:32.116 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3093223 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xm8ApRGOFe 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xm8ApRGOFe 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Xm8ApRGOFe 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Xm8ApRGOFe' 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3093456 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3093456 /var/tmp/bdevperf.sock 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3093456 ']' 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:32.375 17:09:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.375 [2024-05-15 17:09:19.878064] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:18:32.375 [2024-05-15 17:09:19.878111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093456 ] 00:18:32.375 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.376 [2024-05-15 17:09:19.926362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.376 [2024-05-15 17:09:19.996746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.311 17:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:33.311 17:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:33.311 17:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Xm8ApRGOFe 00:18:33.311 [2024-05-15 17:09:20.841058] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.311 [2024-05-15 17:09:20.841130] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:33.311 [2024-05-15 17:09:20.845523] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:33.311 [2024-05-15 17:09:20.845550] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:33.311 [2024-05-15 17:09:20.845577] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:33.311 [2024-05-15 17:09:20.846292] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbba490 (107): Transport endpoint is not connected 00:18:33.311 [2024-05-15 17:09:20.847284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbba490 (9): Bad file descriptor 00:18:33.311 [2024-05-15 17:09:20.848286] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:33.311 [2024-05-15 17:09:20.848297] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:33.311 [2024-05-15 17:09:20.848308] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:33.311 request: 00:18:33.311 { 00:18:33.311 "name": "TLSTEST", 00:18:33.311 "trtype": "tcp", 00:18:33.311 "traddr": "10.0.0.2", 00:18:33.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.311 "adrfam": "ipv4", 00:18:33.311 "trsvcid": "4420", 00:18:33.311 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:33.311 "psk": "/tmp/tmp.Xm8ApRGOFe", 00:18:33.311 "method": "bdev_nvme_attach_controller", 00:18:33.311 "req_id": 1 00:18:33.311 } 00:18:33.311 Got JSON-RPC error response 00:18:33.311 response: 00:18:33.311 { 00:18:33.311 "code": -32602, 00:18:33.311 "message": "Invalid parameters" 00:18:33.311 } 00:18:33.311 17:09:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3093456 00:18:33.311 17:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3093456 ']' 00:18:33.311 17:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3093456 00:18:33.311 17:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:33.311 17:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:33.311 17:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3093456 00:18:33.311 17:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:33.311 17:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:33.311 17:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3093456' 00:18:33.311 killing process with pid 3093456 00:18:33.311 17:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3093456 00:18:33.311 Received shutdown signal, test time was about 10.000000 seconds 00:18:33.311 00:18:33.311 Latency(us) 00:18:33.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.311 =================================================================================================================== 00:18:33.311 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:33.311 [2024-05-15 17:09:20.920541] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:33.311 17:09:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3093456 00:18:33.570 17:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:33.570 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:33.570 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:33.570 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:33.570 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3093699 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3093699 /var/tmp/bdevperf.sock 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3093699 ']' 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:33.571 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.571 [2024-05-15 17:09:21.168578] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:18:33.571 [2024-05-15 17:09:21.168623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093699 ] 00:18:33.571 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.571 [2024-05-15 17:09:21.219053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.829 [2024-05-15 17:09:21.287664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.397 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:34.397 17:09:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:34.397 17:09:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:34.731 [2024-05-15 17:09:22.128601] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:34.731 [2024-05-15 17:09:22.130468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22cbb30 (9): Bad file descriptor 00:18:34.731 [2024-05-15 17:09:22.131466] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:34.731 [2024-05-15 17:09:22.131477] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:34.731 [2024-05-15 17:09:22.131486] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:34.731 request: 00:18:34.731 { 00:18:34.731 "name": "TLSTEST", 00:18:34.731 "trtype": "tcp", 00:18:34.731 "traddr": "10.0.0.2", 00:18:34.731 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.731 "adrfam": "ipv4", 00:18:34.731 "trsvcid": "4420", 00:18:34.731 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.731 "method": "bdev_nvme_attach_controller", 00:18:34.731 "req_id": 1 00:18:34.731 } 00:18:34.731 Got JSON-RPC error response 00:18:34.731 response: 00:18:34.731 { 00:18:34.731 "code": -32602, 00:18:34.731 "message": "Invalid parameters" 00:18:34.731 } 00:18:34.731 17:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3093699 00:18:34.731 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3093699 ']' 00:18:34.731 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3093699 00:18:34.731 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:34.731 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:34.731 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3093699 00:18:34.731 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:34.731 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:34.731 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3093699' 00:18:34.731 killing process with pid 3093699 00:18:34.731 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3093699 00:18:34.731 Received shutdown signal, test time was about 10.000000 seconds 00:18:34.731 00:18:34.731 Latency(us) 00:18:34.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.731 =================================================================================================================== 00:18:34.731 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:34.731 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3093699 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3088112 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3088112 ']' 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3088112 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3088112 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3088112' 00:18:34.991 killing process with pid 3088112 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3088112 
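[Annotation] The request/response blocks above are the JSON-RPC exchange that scripts/rpc.py drives over the Unix socket given with -s /var/tmp/bdevperf.sock. A minimal sketch of the first failing bdev_nvme_attach_controller call (cnode2 with the /tmp/tmp.Xm8ApRGOFe key) issued directly from Python follows; the hand-rolled client and its recv-until-parse loop are assumptions about the wire format (plain JSON-RPC 2.0, no length framing) rather than something this log states, and with these parameters the expected reply is the same -32602 "Invalid parameters" error seen above.

import json
import socket

def rpc_call(sock_path, method, params, req_id=1):
    # Send one JSON-RPC request and read until the reply parses as complete JSON.
    request = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("socket closed before a complete response arrived")
            buf += chunk
            try:
                return json.loads(buf)
            except ValueError:
                continue  # response not complete yet, keep reading

resp = rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
    "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
    "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode2",
    "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "/tmp/tmp.Xm8ApRGOFe"})
print(resp.get("error"))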
00:18:34.991 [2024-05-15 17:09:22.441052] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:34.991 [2024-05-15 17:09:22.441090] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:34.991 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3088112 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.sG4BvOsBCe 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.sG4BvOsBCe 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:35.250 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:35.251 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.251 17:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3093951 00:18:35.251 17:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:35.251 17:09:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3093951 00:18:35.251 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3093951 ']' 00:18:35.251 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.251 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:35.251 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.251 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:35.251 17:09:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.251 [2024-05-15 17:09:22.763490] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:18:35.251 [2024-05-15 17:09:22.763534] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.251 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.251 [2024-05-15 17:09:22.819613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.251 [2024-05-15 17:09:22.894887] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.251 [2024-05-15 17:09:22.894926] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.251 [2024-05-15 17:09:22.894933] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.251 [2024-05-15 17:09:22.894940] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.251 [2024-05-15 17:09:22.894945] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.251 [2024-05-15 17:09:22.894963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.184 17:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:36.184 17:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:36.184 17:09:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:36.184 17:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.184 17:09:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.184 17:09:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.184 17:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.sG4BvOsBCe 00:18:36.184 17:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sG4BvOsBCe 00:18:36.184 17:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:36.184 [2024-05-15 17:09:23.754472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:36.184 17:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:36.442 17:09:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:36.442 [2024-05-15 17:09:24.099345] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:36.442 [2024-05-15 17:09:24.099396] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:36.442 [2024-05-15 17:09:24.099587] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.700 17:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:36.700 malloc0 00:18:36.700 17:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
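[Annotation] The key_long string produced by format_interchange_psk earlier in this run is the NVMe TLS PSK interchange format: the fixed prefix NVMeTLSkey-1, a hash identifier (the 2 passed to the helper), and the base64 of the key bytes with a 4-byte CRC-32 appended, terminated by a colon. A short sketch that rebuilds the string is below; the inline python the helper actually runs is not captured in this trace, so the zlib CRC-32 appended little-endian is an assumption, and the output should only reproduce the key_long written to /tmp/tmp.sG4BvOsBCe (mode 0600) above if that assumption holds.

import base64
import zlib

def format_interchange_psk(key_ascii: str, hmac_id: int) -> str:
    # The key is taken as the literal ASCII bytes that appear on the command line.
    key = key_ascii.encode()
    # Assumption: zlib CRC-32 of the key bytes, appended little-endian before base64.
    crc = zlib.crc32(key).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(hmac_id, base64.b64encode(key + crc).decode())

key_long = format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2)
print(key_long)  # compare against the key_long value in the trace above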
00:18:36.959 17:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sG4BvOsBCe 00:18:36.959 [2024-05-15 17:09:24.608943] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sG4BvOsBCe 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sG4BvOsBCe' 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3094214 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3094214 /var/tmp/bdevperf.sock 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3094214 ']' 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:37.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:37.217 17:09:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.217 [2024-05-15 17:09:24.673499] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:18:37.217 [2024-05-15 17:09:24.673543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094214 ] 00:18:37.217 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.217 [2024-05-15 17:09:24.723094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.217 [2024-05-15 17:09:24.800127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.151 17:09:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:38.151 17:09:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:38.151 17:09:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sG4BvOsBCe 00:18:38.151 [2024-05-15 17:09:25.622749] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.151 [2024-05-15 17:09:25.622823] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:38.151 TLSTESTn1 00:18:38.151 17:09:25 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:38.151 Running I/O for 10 seconds... 00:18:50.358 00:18:50.358 Latency(us) 00:18:50.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.358 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:50.358 Verification LBA range: start 0x0 length 0x2000 00:18:50.358 TLSTESTn1 : 10.02 5575.17 21.78 0.00 0.00 22915.06 5869.75 33508.84 00:18:50.358 =================================================================================================================== 00:18:50.358 Total : 5575.17 21.78 0.00 0.00 22915.06 5869.75 33508.84 00:18:50.358 0 00:18:50.358 17:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:50.359 17:09:35 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3094214 00:18:50.359 17:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3094214 ']' 00:18:50.359 17:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3094214 00:18:50.359 17:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:50.359 17:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:50.359 17:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3094214 00:18:50.359 17:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:50.359 17:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:50.359 17:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3094214' 00:18:50.359 killing process with pid 3094214 00:18:50.359 17:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3094214 00:18:50.359 Received shutdown signal, test time was about 10.000000 seconds 00:18:50.359 00:18:50.359 Latency(us) 00:18:50.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:50.359 =================================================================================================================== 00:18:50.359 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.359 [2024-05-15 17:09:35.914587] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:50.359 17:09:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3094214 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.sG4BvOsBCe 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sG4BvOsBCe 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sG4BvOsBCe 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sG4BvOsBCe 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sG4BvOsBCe' 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3096056 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3096056 /var/tmp/bdevperf.sock 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3096056 ']' 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.359 [2024-05-15 17:09:36.173620] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:18:50.359 [2024-05-15 17:09:36.173671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096056 ] 00:18:50.359 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.359 [2024-05-15 17:09:36.226100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.359 [2024-05-15 17:09:36.296809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:50.359 17:09:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sG4BvOsBCe 00:18:50.359 [2024-05-15 17:09:37.130845] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.359 [2024-05-15 17:09:37.130897] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:50.359 [2024-05-15 17:09:37.130904] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.sG4BvOsBCe 00:18:50.359 request: 00:18:50.359 { 00:18:50.359 "name": "TLSTEST", 00:18:50.359 "trtype": "tcp", 00:18:50.359 "traddr": "10.0.0.2", 00:18:50.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.359 "adrfam": "ipv4", 00:18:50.359 "trsvcid": "4420", 00:18:50.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.359 "psk": "/tmp/tmp.sG4BvOsBCe", 00:18:50.359 "method": "bdev_nvme_attach_controller", 00:18:50.359 "req_id": 1 00:18:50.359 } 00:18:50.359 Got JSON-RPC error response 00:18:50.359 response: 00:18:50.359 { 00:18:50.359 "code": -1, 00:18:50.359 "message": "Operation not permitted" 00:18:50.359 } 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3096056 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3096056 ']' 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3096056 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3096056 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3096056' 00:18:50.359 killing process with pid 3096056 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3096056 00:18:50.359 Received shutdown signal, test time was about 10.000000 seconds 00:18:50.359 00:18:50.359 Latency(us) 00:18:50.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.359 =================================================================================================================== 00:18:50.359 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 3096056 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3093951 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3093951 ']' 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3093951 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3093951 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3093951' 00:18:50.359 killing process with pid 3093951 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3093951 00:18:50.359 [2024-05-15 17:09:37.439479] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:50.359 [2024-05-15 17:09:37.439529] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3093951 00:18:50.359 17:09:37 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:50.360 17:09:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:50.360 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:50.360 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.360 17:09:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3096370 00:18:50.360 17:09:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3096370 00:18:50.360 17:09:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:50.360 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3096370 ']' 00:18:50.360 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.360 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:50.360 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
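[Annotation] The -1 "Operation not permitted" failure above follows directly from the chmod 0666 at target/tls.sh@170: bdev_nvme_load_psk refuses a PSK file that is accessible to group or others, and the same key was accepted earlier only while it was mode 0600. A rough Python equivalent of that gate is sketched below; the exact mode bits SPDK checks are an assumption, since the log only shows 0600 being accepted and 0666 being rejected.

import os
import stat
import tempfile

def psk_file_usable(path: str) -> bool:
    st = os.stat(path)
    # Assumption: any group/other permission bit disqualifies the file, mirroring the
    # "Incorrect permissions for PSK file" error hit once the key file became 0666.
    return stat.S_ISREG(st.st_mode) and (st.st_mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

with tempfile.NamedTemporaryFile(delete=False) as f:
    key_path = f.name          # stand-in for /tmp/tmp.sG4BvOsBCe
for mode in (0o600, 0o666):
    os.chmod(key_path, mode)
    print(oct(mode), "accepted:", psk_file_usable(key_path))
os.unlink(key_path)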
00:18:50.360 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:50.360 17:09:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.360 [2024-05-15 17:09:37.713140] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:18:50.360 [2024-05-15 17:09:37.713194] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.360 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.360 [2024-05-15 17:09:37.770128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.360 [2024-05-15 17:09:37.842483] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.360 [2024-05-15 17:09:37.842522] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.360 [2024-05-15 17:09:37.842530] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.360 [2024-05-15 17:09:37.842536] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.360 [2024-05-15 17:09:37.842542] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.360 [2024-05-15 17:09:37.842559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.sG4BvOsBCe 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.sG4BvOsBCe 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.sG4BvOsBCe 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sG4BvOsBCe 00:18:50.928 17:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:51.187 [2024-05-15 17:09:38.706197] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.187 17:09:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:51.446 17:09:38 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:51.446 [2024-05-15 17:09:39.047065] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:51.446 [2024-05-15 17:09:39.047111] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:51.446 [2024-05-15 17:09:39.047306] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.446 17:09:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:51.705 malloc0 00:18:51.705 17:09:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sG4BvOsBCe 00:18:51.964 [2024-05-15 17:09:39.568660] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:51.964 [2024-05-15 17:09:39.568686] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:51.964 [2024-05-15 17:09:39.568724] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:51.964 request: 00:18:51.964 { 00:18:51.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.964 "host": "nqn.2016-06.io.spdk:host1", 00:18:51.964 "psk": "/tmp/tmp.sG4BvOsBCe", 00:18:51.964 "method": "nvmf_subsystem_add_host", 00:18:51.964 "req_id": 1 00:18:51.964 } 00:18:51.964 Got JSON-RPC error response 00:18:51.964 response: 00:18:51.964 { 00:18:51.964 "code": -32603, 00:18:51.964 "message": "Internal error" 00:18:51.964 } 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3096370 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3096370 ']' 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3096370 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3096370 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3096370' 00:18:51.964 killing process with pid 3096370 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3096370 00:18:51.964 [2024-05-15 17:09:39.617063] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:51.964 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3096370 00:18:52.224 17:09:39 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.sG4BvOsBCe 00:18:52.224 17:09:39 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:52.224 17:09:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:52.224 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:52.224 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.224 17:09:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3096783 00:18:52.224 17:09:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:52.224 17:09:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3096783 00:18:52.224 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3096783 ']' 00:18:52.224 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.224 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:52.224 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.224 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:52.224 17:09:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.483 [2024-05-15 17:09:39.887979] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:18:52.483 [2024-05-15 17:09:39.888025] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.483 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.483 [2024-05-15 17:09:39.943967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.483 [2024-05-15 17:09:40.014633] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.483 [2024-05-15 17:09:40.014683] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.483 [2024-05-15 17:09:40.014694] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.483 [2024-05-15 17:09:40.014709] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.483 [2024-05-15 17:09:40.014717] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:52.483 [2024-05-15 17:09:40.014751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.051 17:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:53.051 17:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:53.051 17:09:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.051 17:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.051 17:09:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.310 17:09:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.310 17:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.sG4BvOsBCe 00:18:53.310 17:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sG4BvOsBCe 00:18:53.310 17:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:53.310 [2024-05-15 17:09:40.884008] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.310 17:09:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:53.570 17:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:53.570 [2024-05-15 17:09:41.216839] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:53.570 [2024-05-15 17:09:41.216881] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:53.570 [2024-05-15 17:09:41.217076] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.829 17:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:53.829 malloc0 00:18:53.829 17:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:54.088 17:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sG4BvOsBCe 00:18:54.088 [2024-05-15 17:09:41.714231] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:54.088 17:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:54.088 17:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3097062 00:18:54.088 17:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:54.088 17:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3097062 /var/tmp/bdevperf.sock 00:18:54.088 17:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3097062 ']' 00:18:54.088 17:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:18:54.088 17:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:54.088 17:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.088 17:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:54.088 17:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.348 [2024-05-15 17:09:41.758466] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:18:54.348 [2024-05-15 17:09:41.758512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097062 ] 00:18:54.348 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.348 [2024-05-15 17:09:41.808828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.348 [2024-05-15 17:09:41.882278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.348 17:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:54.348 17:09:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:54.348 17:09:41 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sG4BvOsBCe 00:18:54.607 [2024-05-15 17:09:42.115032] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:54.607 [2024-05-15 17:09:42.115108] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:54.607 TLSTESTn1 00:18:54.607 17:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:54.866 17:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:54.866 "subsystems": [ 00:18:54.866 { 00:18:54.866 "subsystem": "keyring", 00:18:54.866 "config": [] 00:18:54.866 }, 00:18:54.867 { 00:18:54.867 "subsystem": "iobuf", 00:18:54.867 "config": [ 00:18:54.867 { 00:18:54.867 "method": "iobuf_set_options", 00:18:54.867 "params": { 00:18:54.867 "small_pool_count": 8192, 00:18:54.867 "large_pool_count": 1024, 00:18:54.867 "small_bufsize": 8192, 00:18:54.867 "large_bufsize": 135168 00:18:54.867 } 00:18:54.867 } 00:18:54.867 ] 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "subsystem": "sock", 00:18:54.867 "config": [ 00:18:54.867 { 00:18:54.867 "method": "sock_impl_set_options", 00:18:54.867 "params": { 00:18:54.867 "impl_name": "posix", 00:18:54.867 "recv_buf_size": 2097152, 00:18:54.867 "send_buf_size": 2097152, 00:18:54.867 "enable_recv_pipe": true, 00:18:54.867 "enable_quickack": false, 00:18:54.867 "enable_placement_id": 0, 00:18:54.867 "enable_zerocopy_send_server": true, 00:18:54.867 "enable_zerocopy_send_client": false, 00:18:54.867 "zerocopy_threshold": 0, 00:18:54.867 "tls_version": 0, 00:18:54.867 "enable_ktls": false 00:18:54.867 } 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "method": "sock_impl_set_options", 00:18:54.867 "params": { 00:18:54.867 
"impl_name": "ssl", 00:18:54.867 "recv_buf_size": 4096, 00:18:54.867 "send_buf_size": 4096, 00:18:54.867 "enable_recv_pipe": true, 00:18:54.867 "enable_quickack": false, 00:18:54.867 "enable_placement_id": 0, 00:18:54.867 "enable_zerocopy_send_server": true, 00:18:54.867 "enable_zerocopy_send_client": false, 00:18:54.867 "zerocopy_threshold": 0, 00:18:54.867 "tls_version": 0, 00:18:54.867 "enable_ktls": false 00:18:54.867 } 00:18:54.867 } 00:18:54.867 ] 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "subsystem": "vmd", 00:18:54.867 "config": [] 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "subsystem": "accel", 00:18:54.867 "config": [ 00:18:54.867 { 00:18:54.867 "method": "accel_set_options", 00:18:54.867 "params": { 00:18:54.867 "small_cache_size": 128, 00:18:54.867 "large_cache_size": 16, 00:18:54.867 "task_count": 2048, 00:18:54.867 "sequence_count": 2048, 00:18:54.867 "buf_count": 2048 00:18:54.867 } 00:18:54.867 } 00:18:54.867 ] 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "subsystem": "bdev", 00:18:54.867 "config": [ 00:18:54.867 { 00:18:54.867 "method": "bdev_set_options", 00:18:54.867 "params": { 00:18:54.867 "bdev_io_pool_size": 65535, 00:18:54.867 "bdev_io_cache_size": 256, 00:18:54.867 "bdev_auto_examine": true, 00:18:54.867 "iobuf_small_cache_size": 128, 00:18:54.867 "iobuf_large_cache_size": 16 00:18:54.867 } 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "method": "bdev_raid_set_options", 00:18:54.867 "params": { 00:18:54.867 "process_window_size_kb": 1024 00:18:54.867 } 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "method": "bdev_iscsi_set_options", 00:18:54.867 "params": { 00:18:54.867 "timeout_sec": 30 00:18:54.867 } 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "method": "bdev_nvme_set_options", 00:18:54.867 "params": { 00:18:54.867 "action_on_timeout": "none", 00:18:54.867 "timeout_us": 0, 00:18:54.867 "timeout_admin_us": 0, 00:18:54.867 "keep_alive_timeout_ms": 10000, 00:18:54.867 "arbitration_burst": 0, 00:18:54.867 "low_priority_weight": 0, 00:18:54.867 "medium_priority_weight": 0, 00:18:54.867 "high_priority_weight": 0, 00:18:54.867 "nvme_adminq_poll_period_us": 10000, 00:18:54.867 "nvme_ioq_poll_period_us": 0, 00:18:54.867 "io_queue_requests": 0, 00:18:54.867 "delay_cmd_submit": true, 00:18:54.867 "transport_retry_count": 4, 00:18:54.867 "bdev_retry_count": 3, 00:18:54.867 "transport_ack_timeout": 0, 00:18:54.867 "ctrlr_loss_timeout_sec": 0, 00:18:54.867 "reconnect_delay_sec": 0, 00:18:54.867 "fast_io_fail_timeout_sec": 0, 00:18:54.867 "disable_auto_failback": false, 00:18:54.867 "generate_uuids": false, 00:18:54.867 "transport_tos": 0, 00:18:54.867 "nvme_error_stat": false, 00:18:54.867 "rdma_srq_size": 0, 00:18:54.867 "io_path_stat": false, 00:18:54.867 "allow_accel_sequence": false, 00:18:54.867 "rdma_max_cq_size": 0, 00:18:54.867 "rdma_cm_event_timeout_ms": 0, 00:18:54.867 "dhchap_digests": [ 00:18:54.867 "sha256", 00:18:54.867 "sha384", 00:18:54.867 "sha512" 00:18:54.867 ], 00:18:54.867 "dhchap_dhgroups": [ 00:18:54.867 "null", 00:18:54.867 "ffdhe2048", 00:18:54.867 "ffdhe3072", 00:18:54.867 "ffdhe4096", 00:18:54.867 "ffdhe6144", 00:18:54.867 "ffdhe8192" 00:18:54.867 ] 00:18:54.867 } 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "method": "bdev_nvme_set_hotplug", 00:18:54.867 "params": { 00:18:54.867 "period_us": 100000, 00:18:54.867 "enable": false 00:18:54.867 } 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "method": "bdev_malloc_create", 00:18:54.867 "params": { 00:18:54.867 "name": "malloc0", 00:18:54.867 "num_blocks": 8192, 00:18:54.867 "block_size": 4096, 00:18:54.867 
"physical_block_size": 4096, 00:18:54.867 "uuid": "0e84d3ce-0bfd-47c2-b1eb-59ea47f85817", 00:18:54.867 "optimal_io_boundary": 0 00:18:54.867 } 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "method": "bdev_wait_for_examine" 00:18:54.867 } 00:18:54.867 ] 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "subsystem": "nbd", 00:18:54.867 "config": [] 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "subsystem": "scheduler", 00:18:54.867 "config": [ 00:18:54.867 { 00:18:54.867 "method": "framework_set_scheduler", 00:18:54.867 "params": { 00:18:54.867 "name": "static" 00:18:54.867 } 00:18:54.867 } 00:18:54.867 ] 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "subsystem": "nvmf", 00:18:54.867 "config": [ 00:18:54.867 { 00:18:54.867 "method": "nvmf_set_config", 00:18:54.867 "params": { 00:18:54.867 "discovery_filter": "match_any", 00:18:54.867 "admin_cmd_passthru": { 00:18:54.867 "identify_ctrlr": false 00:18:54.867 } 00:18:54.867 } 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "method": "nvmf_set_max_subsystems", 00:18:54.867 "params": { 00:18:54.867 "max_subsystems": 1024 00:18:54.867 } 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "method": "nvmf_set_crdt", 00:18:54.867 "params": { 00:18:54.867 "crdt1": 0, 00:18:54.867 "crdt2": 0, 00:18:54.867 "crdt3": 0 00:18:54.867 } 00:18:54.867 }, 00:18:54.867 { 00:18:54.867 "method": "nvmf_create_transport", 00:18:54.867 "params": { 00:18:54.867 "trtype": "TCP", 00:18:54.867 "max_queue_depth": 128, 00:18:54.867 "max_io_qpairs_per_ctrlr": 127, 00:18:54.867 "in_capsule_data_size": 4096, 00:18:54.867 "max_io_size": 131072, 00:18:54.867 "io_unit_size": 131072, 00:18:54.867 "max_aq_depth": 128, 00:18:54.867 "num_shared_buffers": 511, 00:18:54.867 "buf_cache_size": 4294967295, 00:18:54.867 "dif_insert_or_strip": false, 00:18:54.868 "zcopy": false, 00:18:54.868 "c2h_success": false, 00:18:54.868 "sock_priority": 0, 00:18:54.868 "abort_timeout_sec": 1, 00:18:54.868 "ack_timeout": 0, 00:18:54.868 "data_wr_pool_size": 0 00:18:54.868 } 00:18:54.868 }, 00:18:54.868 { 00:18:54.868 "method": "nvmf_create_subsystem", 00:18:54.868 "params": { 00:18:54.868 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.868 "allow_any_host": false, 00:18:54.868 "serial_number": "SPDK00000000000001", 00:18:54.868 "model_number": "SPDK bdev Controller", 00:18:54.868 "max_namespaces": 10, 00:18:54.868 "min_cntlid": 1, 00:18:54.868 "max_cntlid": 65519, 00:18:54.868 "ana_reporting": false 00:18:54.868 } 00:18:54.868 }, 00:18:54.868 { 00:18:54.868 "method": "nvmf_subsystem_add_host", 00:18:54.868 "params": { 00:18:54.868 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.868 "host": "nqn.2016-06.io.spdk:host1", 00:18:54.868 "psk": "/tmp/tmp.sG4BvOsBCe" 00:18:54.868 } 00:18:54.868 }, 00:18:54.868 { 00:18:54.868 "method": "nvmf_subsystem_add_ns", 00:18:54.868 "params": { 00:18:54.868 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.868 "namespace": { 00:18:54.868 "nsid": 1, 00:18:54.868 "bdev_name": "malloc0", 00:18:54.868 "nguid": "0E84D3CE0BFD47C2B1EB59EA47F85817", 00:18:54.868 "uuid": "0e84d3ce-0bfd-47c2-b1eb-59ea47f85817", 00:18:54.868 "no_auto_visible": false 00:18:54.868 } 00:18:54.868 } 00:18:54.868 }, 00:18:54.868 { 00:18:54.868 "method": "nvmf_subsystem_add_listener", 00:18:54.868 "params": { 00:18:54.868 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.868 "listen_address": { 00:18:54.868 "trtype": "TCP", 00:18:54.868 "adrfam": "IPv4", 00:18:54.868 "traddr": "10.0.0.2", 00:18:54.868 "trsvcid": "4420" 00:18:54.868 }, 00:18:54.868 "secure_channel": true 00:18:54.868 } 00:18:54.868 } 00:18:54.868 ] 00:18:54.868 } 
00:18:54.868 ] 00:18:54.868 }' 00:18:54.868 17:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:55.128 17:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:55.128 "subsystems": [ 00:18:55.128 { 00:18:55.128 "subsystem": "keyring", 00:18:55.128 "config": [] 00:18:55.128 }, 00:18:55.128 { 00:18:55.128 "subsystem": "iobuf", 00:18:55.128 "config": [ 00:18:55.128 { 00:18:55.128 "method": "iobuf_set_options", 00:18:55.128 "params": { 00:18:55.128 "small_pool_count": 8192, 00:18:55.128 "large_pool_count": 1024, 00:18:55.128 "small_bufsize": 8192, 00:18:55.128 "large_bufsize": 135168 00:18:55.128 } 00:18:55.128 } 00:18:55.128 ] 00:18:55.128 }, 00:18:55.128 { 00:18:55.128 "subsystem": "sock", 00:18:55.128 "config": [ 00:18:55.128 { 00:18:55.128 "method": "sock_impl_set_options", 00:18:55.128 "params": { 00:18:55.128 "impl_name": "posix", 00:18:55.128 "recv_buf_size": 2097152, 00:18:55.128 "send_buf_size": 2097152, 00:18:55.128 "enable_recv_pipe": true, 00:18:55.128 "enable_quickack": false, 00:18:55.128 "enable_placement_id": 0, 00:18:55.128 "enable_zerocopy_send_server": true, 00:18:55.128 "enable_zerocopy_send_client": false, 00:18:55.128 "zerocopy_threshold": 0, 00:18:55.128 "tls_version": 0, 00:18:55.128 "enable_ktls": false 00:18:55.128 } 00:18:55.128 }, 00:18:55.128 { 00:18:55.128 "method": "sock_impl_set_options", 00:18:55.128 "params": { 00:18:55.128 "impl_name": "ssl", 00:18:55.128 "recv_buf_size": 4096, 00:18:55.128 "send_buf_size": 4096, 00:18:55.128 "enable_recv_pipe": true, 00:18:55.128 "enable_quickack": false, 00:18:55.128 "enable_placement_id": 0, 00:18:55.128 "enable_zerocopy_send_server": true, 00:18:55.128 "enable_zerocopy_send_client": false, 00:18:55.128 "zerocopy_threshold": 0, 00:18:55.128 "tls_version": 0, 00:18:55.128 "enable_ktls": false 00:18:55.128 } 00:18:55.128 } 00:18:55.128 ] 00:18:55.128 }, 00:18:55.128 { 00:18:55.128 "subsystem": "vmd", 00:18:55.128 "config": [] 00:18:55.128 }, 00:18:55.128 { 00:18:55.129 "subsystem": "accel", 00:18:55.129 "config": [ 00:18:55.129 { 00:18:55.129 "method": "accel_set_options", 00:18:55.129 "params": { 00:18:55.129 "small_cache_size": 128, 00:18:55.129 "large_cache_size": 16, 00:18:55.129 "task_count": 2048, 00:18:55.129 "sequence_count": 2048, 00:18:55.129 "buf_count": 2048 00:18:55.129 } 00:18:55.129 } 00:18:55.129 ] 00:18:55.129 }, 00:18:55.129 { 00:18:55.129 "subsystem": "bdev", 00:18:55.129 "config": [ 00:18:55.129 { 00:18:55.129 "method": "bdev_set_options", 00:18:55.129 "params": { 00:18:55.129 "bdev_io_pool_size": 65535, 00:18:55.129 "bdev_io_cache_size": 256, 00:18:55.129 "bdev_auto_examine": true, 00:18:55.129 "iobuf_small_cache_size": 128, 00:18:55.129 "iobuf_large_cache_size": 16 00:18:55.129 } 00:18:55.129 }, 00:18:55.129 { 00:18:55.129 "method": "bdev_raid_set_options", 00:18:55.129 "params": { 00:18:55.129 "process_window_size_kb": 1024 00:18:55.129 } 00:18:55.129 }, 00:18:55.129 { 00:18:55.129 "method": "bdev_iscsi_set_options", 00:18:55.129 "params": { 00:18:55.129 "timeout_sec": 30 00:18:55.129 } 00:18:55.129 }, 00:18:55.129 { 00:18:55.129 "method": "bdev_nvme_set_options", 00:18:55.129 "params": { 00:18:55.129 "action_on_timeout": "none", 00:18:55.129 "timeout_us": 0, 00:18:55.129 "timeout_admin_us": 0, 00:18:55.129 "keep_alive_timeout_ms": 10000, 00:18:55.129 "arbitration_burst": 0, 00:18:55.129 "low_priority_weight": 0, 00:18:55.129 "medium_priority_weight": 0, 00:18:55.129 
"high_priority_weight": 0, 00:18:55.129 "nvme_adminq_poll_period_us": 10000, 00:18:55.129 "nvme_ioq_poll_period_us": 0, 00:18:55.129 "io_queue_requests": 512, 00:18:55.129 "delay_cmd_submit": true, 00:18:55.129 "transport_retry_count": 4, 00:18:55.129 "bdev_retry_count": 3, 00:18:55.129 "transport_ack_timeout": 0, 00:18:55.129 "ctrlr_loss_timeout_sec": 0, 00:18:55.129 "reconnect_delay_sec": 0, 00:18:55.129 "fast_io_fail_timeout_sec": 0, 00:18:55.129 "disable_auto_failback": false, 00:18:55.129 "generate_uuids": false, 00:18:55.129 "transport_tos": 0, 00:18:55.129 "nvme_error_stat": false, 00:18:55.129 "rdma_srq_size": 0, 00:18:55.129 "io_path_stat": false, 00:18:55.129 "allow_accel_sequence": false, 00:18:55.129 "rdma_max_cq_size": 0, 00:18:55.129 "rdma_cm_event_timeout_ms": 0, 00:18:55.129 "dhchap_digests": [ 00:18:55.129 "sha256", 00:18:55.129 "sha384", 00:18:55.129 "sha512" 00:18:55.129 ], 00:18:55.129 "dhchap_dhgroups": [ 00:18:55.129 "null", 00:18:55.129 "ffdhe2048", 00:18:55.129 "ffdhe3072", 00:18:55.129 "ffdhe4096", 00:18:55.129 "ffdhe6144", 00:18:55.129 "ffdhe8192" 00:18:55.129 ] 00:18:55.129 } 00:18:55.129 }, 00:18:55.129 { 00:18:55.129 "method": "bdev_nvme_attach_controller", 00:18:55.129 "params": { 00:18:55.129 "name": "TLSTEST", 00:18:55.129 "trtype": "TCP", 00:18:55.129 "adrfam": "IPv4", 00:18:55.129 "traddr": "10.0.0.2", 00:18:55.129 "trsvcid": "4420", 00:18:55.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.129 "prchk_reftag": false, 00:18:55.129 "prchk_guard": false, 00:18:55.129 "ctrlr_loss_timeout_sec": 0, 00:18:55.129 "reconnect_delay_sec": 0, 00:18:55.129 "fast_io_fail_timeout_sec": 0, 00:18:55.129 "psk": "/tmp/tmp.sG4BvOsBCe", 00:18:55.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:55.129 "hdgst": false, 00:18:55.129 "ddgst": false 00:18:55.129 } 00:18:55.129 }, 00:18:55.129 { 00:18:55.129 "method": "bdev_nvme_set_hotplug", 00:18:55.129 "params": { 00:18:55.129 "period_us": 100000, 00:18:55.129 "enable": false 00:18:55.129 } 00:18:55.129 }, 00:18:55.129 { 00:18:55.129 "method": "bdev_wait_for_examine" 00:18:55.129 } 00:18:55.129 ] 00:18:55.129 }, 00:18:55.129 { 00:18:55.129 "subsystem": "nbd", 00:18:55.129 "config": [] 00:18:55.129 } 00:18:55.129 ] 00:18:55.129 }' 00:18:55.129 17:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3097062 00:18:55.129 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3097062 ']' 00:18:55.129 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3097062 00:18:55.129 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:55.129 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:55.129 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3097062 00:18:55.129 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:55.129 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:55.129 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3097062' 00:18:55.129 killing process with pid 3097062 00:18:55.129 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3097062 00:18:55.129 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.129 00:18:55.129 Latency(us) 00:18:55.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.129 
=================================================================================================================== 00:18:55.129 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:55.129 [2024-05-15 17:09:42.725272] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:55.129 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3097062 00:18:55.388 17:09:42 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3096783 00:18:55.388 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3096783 ']' 00:18:55.388 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3096783 00:18:55.388 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:18:55.388 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:55.388 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3096783 00:18:55.388 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:55.388 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:55.388 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3096783' 00:18:55.388 killing process with pid 3096783 00:18:55.388 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3096783 00:18:55.388 [2024-05-15 17:09:42.973692] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:55.388 [2024-05-15 17:09:42.973727] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:55.388 17:09:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3096783 00:18:55.647 17:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:55.647 17:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.647 17:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:55.647 17:09:43 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:55.647 "subsystems": [ 00:18:55.647 { 00:18:55.647 "subsystem": "keyring", 00:18:55.647 "config": [] 00:18:55.647 }, 00:18:55.647 { 00:18:55.647 "subsystem": "iobuf", 00:18:55.647 "config": [ 00:18:55.647 { 00:18:55.647 "method": "iobuf_set_options", 00:18:55.647 "params": { 00:18:55.647 "small_pool_count": 8192, 00:18:55.647 "large_pool_count": 1024, 00:18:55.647 "small_bufsize": 8192, 00:18:55.647 "large_bufsize": 135168 00:18:55.647 } 00:18:55.647 } 00:18:55.647 ] 00:18:55.647 }, 00:18:55.647 { 00:18:55.647 "subsystem": "sock", 00:18:55.647 "config": [ 00:18:55.647 { 00:18:55.647 "method": "sock_impl_set_options", 00:18:55.647 "params": { 00:18:55.647 "impl_name": "posix", 00:18:55.647 "recv_buf_size": 2097152, 00:18:55.647 "send_buf_size": 2097152, 00:18:55.647 "enable_recv_pipe": true, 00:18:55.647 "enable_quickack": false, 00:18:55.647 "enable_placement_id": 0, 00:18:55.647 "enable_zerocopy_send_server": true, 00:18:55.647 "enable_zerocopy_send_client": false, 00:18:55.647 "zerocopy_threshold": 0, 00:18:55.647 "tls_version": 0, 00:18:55.647 "enable_ktls": false 00:18:55.647 } 00:18:55.647 }, 00:18:55.647 { 00:18:55.647 "method": "sock_impl_set_options", 00:18:55.647 
"params": { 00:18:55.647 "impl_name": "ssl", 00:18:55.647 "recv_buf_size": 4096, 00:18:55.647 "send_buf_size": 4096, 00:18:55.647 "enable_recv_pipe": true, 00:18:55.647 "enable_quickack": false, 00:18:55.647 "enable_placement_id": 0, 00:18:55.647 "enable_zerocopy_send_server": true, 00:18:55.647 "enable_zerocopy_send_client": false, 00:18:55.647 "zerocopy_threshold": 0, 00:18:55.647 "tls_version": 0, 00:18:55.647 "enable_ktls": false 00:18:55.647 } 00:18:55.647 } 00:18:55.647 ] 00:18:55.647 }, 00:18:55.647 { 00:18:55.647 "subsystem": "vmd", 00:18:55.647 "config": [] 00:18:55.647 }, 00:18:55.647 { 00:18:55.647 "subsystem": "accel", 00:18:55.647 "config": [ 00:18:55.647 { 00:18:55.647 "method": "accel_set_options", 00:18:55.647 "params": { 00:18:55.647 "small_cache_size": 128, 00:18:55.647 "large_cache_size": 16, 00:18:55.647 "task_count": 2048, 00:18:55.647 "sequence_count": 2048, 00:18:55.647 "buf_count": 2048 00:18:55.647 } 00:18:55.647 } 00:18:55.647 ] 00:18:55.647 }, 00:18:55.647 { 00:18:55.647 "subsystem": "bdev", 00:18:55.647 "config": [ 00:18:55.647 { 00:18:55.647 "method": "bdev_set_options", 00:18:55.647 "params": { 00:18:55.647 "bdev_io_pool_size": 65535, 00:18:55.647 "bdev_io_cache_size": 256, 00:18:55.647 "bdev_auto_examine": true, 00:18:55.647 "iobuf_small_cache_size": 128, 00:18:55.647 "iobuf_large_cache_size": 16 00:18:55.647 } 00:18:55.647 }, 00:18:55.647 { 00:18:55.647 "method": "bdev_raid_set_options", 00:18:55.647 "params": { 00:18:55.647 "process_window_size_kb": 1024 00:18:55.647 } 00:18:55.647 }, 00:18:55.647 { 00:18:55.647 "method": "bdev_iscsi_set_options", 00:18:55.647 "params": { 00:18:55.647 "timeout_sec": 30 00:18:55.647 } 00:18:55.647 }, 00:18:55.647 { 00:18:55.647 "method": "bdev_nvme_set_options", 00:18:55.647 "params": { 00:18:55.647 "action_on_timeout": "none", 00:18:55.647 "timeout_us": 0, 00:18:55.647 "timeout_admin_us": 0, 00:18:55.647 "keep_alive_timeout_ms": 10000, 00:18:55.647 "arbitration_burst": 0, 00:18:55.647 "low_priority_weight": 0, 00:18:55.647 "medium_priority_weight": 0, 00:18:55.647 "high_priority_weight": 0, 00:18:55.647 "nvme_adminq_poll_period_us": 10000, 00:18:55.647 "nvme_ioq_poll_period_us": 0, 00:18:55.647 "io_queue_requests": 0, 00:18:55.647 "delay_cmd_submit": true, 00:18:55.647 "transport_retry_count": 4, 00:18:55.647 "bdev_retry_count": 3, 00:18:55.648 "transport_ack_timeout": 0, 00:18:55.648 "ctrlr_loss_timeout_sec": 0, 00:18:55.648 "reconnect_delay_sec": 0, 00:18:55.648 "fast_io_fail_timeout_sec": 0, 00:18:55.648 "disable_auto_failback": false, 00:18:55.648 "generate_uuids": false, 00:18:55.648 "transport_tos": 0, 00:18:55.648 "nvme_error_stat": false, 00:18:55.648 "rdma_srq_size": 0, 00:18:55.648 "io_path_stat": false, 00:18:55.648 "allow_accel_sequence": false, 00:18:55.648 "rdma_max_cq_size": 0, 00:18:55.648 "rdma_cm_event_timeout_ms": 0, 00:18:55.648 "dhchap_digests": [ 00:18:55.648 "sha256", 00:18:55.648 "sha384", 00:18:55.648 "sha512" 00:18:55.648 ], 00:18:55.648 "dhchap_dhgroups": [ 00:18:55.648 "null", 00:18:55.648 "ffdhe2048", 00:18:55.648 "ffdhe3072", 00:18:55.648 "ffdhe4096", 00:18:55.648 "ffdhe6144", 00:18:55.648 "ffdhe8192" 00:18:55.648 ] 00:18:55.648 } 00:18:55.648 }, 00:18:55.648 { 00:18:55.648 "method": "bdev_nvme_set_hotplug", 00:18:55.648 "params": { 00:18:55.648 "period_us": 100000, 00:18:55.648 "enable": false 00:18:55.648 } 00:18:55.648 }, 00:18:55.648 { 00:18:55.648 "method": "bdev_malloc_create", 00:18:55.648 "params": { 00:18:55.648 "name": "malloc0", 00:18:55.648 "num_blocks": 8192, 00:18:55.648 
"block_size": 4096, 00:18:55.648 "physical_block_size": 4096, 00:18:55.648 "uuid": "0e84d3ce-0bfd-47c2-b1eb-59ea47f85817", 00:18:55.648 "optimal_io_boundary": 0 00:18:55.648 } 00:18:55.648 }, 00:18:55.648 { 00:18:55.648 "method": "bdev_wait_for_examine" 00:18:55.648 } 00:18:55.648 ] 00:18:55.648 }, 00:18:55.648 { 00:18:55.648 "subsystem": "nbd", 00:18:55.648 "config": [] 00:18:55.648 }, 00:18:55.648 { 00:18:55.648 "subsystem": "scheduler", 00:18:55.648 "config": [ 00:18:55.648 { 00:18:55.648 "method": "framework_set_scheduler", 00:18:55.648 "params": { 00:18:55.648 "name": "static" 00:18:55.648 } 00:18:55.648 } 00:18:55.648 ] 00:18:55.648 }, 00:18:55.648 { 00:18:55.648 "subsystem": "nvmf", 00:18:55.648 "config": [ 00:18:55.648 { 00:18:55.648 "method": "nvmf_set_config", 00:18:55.648 "params": { 00:18:55.648 "discovery_filter": "match_any", 00:18:55.648 "admin_cmd_passthru": { 00:18:55.648 "identify_ctrlr": false 00:18:55.648 } 00:18:55.648 } 00:18:55.648 }, 00:18:55.648 { 00:18:55.648 "method": "nvmf_set_max_subsystems", 00:18:55.648 "params": { 00:18:55.648 "max_subsystems": 1024 00:18:55.648 } 00:18:55.648 }, 00:18:55.648 { 00:18:55.648 "method": "nvmf_set_crdt", 00:18:55.648 "params": { 00:18:55.648 "crdt1": 0, 00:18:55.648 "crdt2": 0, 00:18:55.648 "crdt3": 0 00:18:55.648 } 00:18:55.648 }, 00:18:55.648 { 00:18:55.648 "method": "nvmf_create_transport", 00:18:55.648 "params": { 00:18:55.648 "trtype": "TCP", 00:18:55.648 "max_queue_depth": 128, 00:18:55.648 "max_io_qpairs_per_ctrlr": 127, 00:18:55.648 "in_capsule_data_size": 4096, 00:18:55.648 "max_io_size": 131072, 00:18:55.648 "io_unit_size": 131072, 00:18:55.648 "max_aq_depth": 128, 00:18:55.648 "num_shared_buffers": 511, 00:18:55.648 "buf_cache_size": 4294967295, 00:18:55.648 "dif_insert_or_strip": false, 00:18:55.648 "zcopy": false, 00:18:55.648 "c2h_success": false, 00:18:55.648 "sock_priority": 0, 00:18:55.648 "abort_timeout_sec": 1, 00:18:55.648 "ack_timeout": 0, 00:18:55.648 "data_wr_pool_size": 0 00:18:55.648 } 00:18:55.648 }, 00:18:55.648 { 00:18:55.648 "method": "nvmf_create_subsystem", 00:18:55.648 "params": { 00:18:55.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.648 "allow_any_host": false, 00:18:55.648 "serial_number": "SPDK00000000000001", 00:18:55.648 "model_number": "SPDK bdev Controller", 00:18:55.648 "max_namespaces": 10, 00:18:55.648 "min_cntlid": 1, 00:18:55.648 "max_cntlid": 65519, 00:18:55.648 "ana_reporting": false 00:18:55.648 } 00:18:55.648 }, 00:18:55.648 { 00:18:55.648 "method": "nvmf_subsystem_add_host", 00:18:55.648 "params": { 00:18:55.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.648 "host": "nqn.2016-06.io.spdk:host1", 00:18:55.648 "psk": "/tmp/tmp.sG4BvOsBCe" 00:18:55.648 } 00:18:55.648 }, 00:18:55.648 { 00:18:55.648 "method": "nvmf_subsystem_add_ns", 00:18:55.648 "params": { 00:18:55.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.648 "namespace": { 00:18:55.648 "nsid": 1, 00:18:55.648 "bdev_name": "malloc0", 00:18:55.648 "nguid": "0E84D3CE0BFD47C2B1EB59EA47F85817", 00:18:55.648 "uuid": "0e84d3ce-0bfd-47c2-b1eb-59ea47f85817", 00:18:55.648 "no_auto_visible": false 00:18:55.648 } 00:18:55.648 } 00:18:55.648 }, 00:18:55.648 { 00:18:55.648 "method": "nvmf_subsystem_add_listener", 00:18:55.648 "params": { 00:18:55.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:55.648 "listen_address": { 00:18:55.648 "trtype": "TCP", 00:18:55.648 "adrfam": "IPv4", 00:18:55.648 "traddr": "10.0.0.2", 00:18:55.648 "trsvcid": "4420" 00:18:55.648 }, 00:18:55.648 "secure_channel": true 00:18:55.648 } 00:18:55.648 } 
00:18:55.648 ] 00:18:55.648 } 00:18:55.648 ] 00:18:55.648 }' 00:18:55.648 17:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.648 17:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3097350 00:18:55.648 17:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:55.648 17:09:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3097350 00:18:55.648 17:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3097350 ']' 00:18:55.648 17:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.648 17:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:55.648 17:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.648 17:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:55.648 17:09:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.648 [2024-05-15 17:09:43.243920] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:18:55.648 [2024-05-15 17:09:43.243965] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.648 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.648 [2024-05-15 17:09:43.299856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.907 [2024-05-15 17:09:43.379265] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.907 [2024-05-15 17:09:43.379302] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.907 [2024-05-15 17:09:43.379309] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.907 [2024-05-15 17:09:43.379315] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.907 [2024-05-15 17:09:43.379321] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:55.907 [2024-05-15 17:09:43.379375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.165 [2024-05-15 17:09:43.573072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.165 [2024-05-15 17:09:43.589025] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:56.165 [2024-05-15 17:09:43.605060] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:56.165 [2024-05-15 17:09:43.605119] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:56.165 [2024-05-15 17:09:43.613300] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.424 17:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:56.424 17:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:56.424 17:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:56.424 17:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:56.424 17:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.424 17:09:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.424 17:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3097534 00:18:56.424 17:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3097534 /var/tmp/bdevperf.sock 00:18:56.424 17:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3097534 ']' 00:18:56.424 17:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.424 17:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:56.424 17:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:56.424 17:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
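Note on this pass: the target above was started from the JSON piped in on /dev/fd/62, and the bdevperf configuration echoed next (piped in on /dev/fd/63) attaches to nqn.2016-06.io.spdk:cnode1 with the pre-shared key given as a file path ("psk": "/tmp/tmp.sG4BvOsBCe"); that in-config PSK path is exactly what the run flags with the 'spdk_nvme_ctrlr_opts.psk' and 'PSK path' deprecation warnings (scheduled for removal in v24.09). For reference, a minimal sketch of the same target-side TLS setup done with runtime RPCs instead of a pre-generated config; every command below also appears verbatim later in this run (target/tls.sh@51-@58), with the long workspace path shortened here to scripts/rpc.py:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sG4BvOsBCe

The -k flag on nvmf_subsystem_add_listener is what enables TLS on the listener and produces the "TLS support is considered experimental" notices seen throughout this log.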
00:18:56.424 17:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:56.424 "subsystems": [ 00:18:56.424 { 00:18:56.424 "subsystem": "keyring", 00:18:56.424 "config": [] 00:18:56.424 }, 00:18:56.424 { 00:18:56.424 "subsystem": "iobuf", 00:18:56.424 "config": [ 00:18:56.424 { 00:18:56.424 "method": "iobuf_set_options", 00:18:56.424 "params": { 00:18:56.424 "small_pool_count": 8192, 00:18:56.424 "large_pool_count": 1024, 00:18:56.424 "small_bufsize": 8192, 00:18:56.424 "large_bufsize": 135168 00:18:56.424 } 00:18:56.424 } 00:18:56.424 ] 00:18:56.424 }, 00:18:56.424 { 00:18:56.424 "subsystem": "sock", 00:18:56.424 "config": [ 00:18:56.424 { 00:18:56.424 "method": "sock_impl_set_options", 00:18:56.425 "params": { 00:18:56.425 "impl_name": "posix", 00:18:56.425 "recv_buf_size": 2097152, 00:18:56.425 "send_buf_size": 2097152, 00:18:56.425 "enable_recv_pipe": true, 00:18:56.425 "enable_quickack": false, 00:18:56.425 "enable_placement_id": 0, 00:18:56.425 "enable_zerocopy_send_server": true, 00:18:56.425 "enable_zerocopy_send_client": false, 00:18:56.425 "zerocopy_threshold": 0, 00:18:56.425 "tls_version": 0, 00:18:56.425 "enable_ktls": false 00:18:56.425 } 00:18:56.425 }, 00:18:56.425 { 00:18:56.425 "method": "sock_impl_set_options", 00:18:56.425 "params": { 00:18:56.425 "impl_name": "ssl", 00:18:56.425 "recv_buf_size": 4096, 00:18:56.425 "send_buf_size": 4096, 00:18:56.425 "enable_recv_pipe": true, 00:18:56.425 "enable_quickack": false, 00:18:56.425 "enable_placement_id": 0, 00:18:56.425 "enable_zerocopy_send_server": true, 00:18:56.425 "enable_zerocopy_send_client": false, 00:18:56.425 "zerocopy_threshold": 0, 00:18:56.425 "tls_version": 0, 00:18:56.425 "enable_ktls": false 00:18:56.425 } 00:18:56.425 } 00:18:56.425 ] 00:18:56.425 }, 00:18:56.425 { 00:18:56.425 "subsystem": "vmd", 00:18:56.425 "config": [] 00:18:56.425 }, 00:18:56.425 { 00:18:56.425 "subsystem": "accel", 00:18:56.425 "config": [ 00:18:56.425 { 00:18:56.425 "method": "accel_set_options", 00:18:56.425 "params": { 00:18:56.425 "small_cache_size": 128, 00:18:56.425 "large_cache_size": 16, 00:18:56.425 "task_count": 2048, 00:18:56.425 "sequence_count": 2048, 00:18:56.425 "buf_count": 2048 00:18:56.425 } 00:18:56.425 } 00:18:56.425 ] 00:18:56.425 }, 00:18:56.425 { 00:18:56.425 "subsystem": "bdev", 00:18:56.425 "config": [ 00:18:56.425 { 00:18:56.425 "method": "bdev_set_options", 00:18:56.425 "params": { 00:18:56.425 "bdev_io_pool_size": 65535, 00:18:56.425 "bdev_io_cache_size": 256, 00:18:56.425 "bdev_auto_examine": true, 00:18:56.425 "iobuf_small_cache_size": 128, 00:18:56.425 "iobuf_large_cache_size": 16 00:18:56.425 } 00:18:56.425 }, 00:18:56.425 { 00:18:56.425 "method": "bdev_raid_set_options", 00:18:56.425 "params": { 00:18:56.425 "process_window_size_kb": 1024 00:18:56.425 } 00:18:56.425 }, 00:18:56.425 { 00:18:56.425 "method": "bdev_iscsi_set_options", 00:18:56.425 "params": { 00:18:56.425 "timeout_sec": 30 00:18:56.425 } 00:18:56.425 }, 00:18:56.425 { 00:18:56.425 "method": "bdev_nvme_set_options", 00:18:56.425 "params": { 00:18:56.425 "action_on_timeout": "none", 00:18:56.425 "timeout_us": 0, 00:18:56.425 "timeout_admin_us": 0, 00:18:56.425 "keep_alive_timeout_ms": 10000, 00:18:56.425 "arbitration_burst": 0, 00:18:56.425 "low_priority_weight": 0, 00:18:56.425 "medium_priority_weight": 0, 00:18:56.425 "high_priority_weight": 0, 00:18:56.425 "nvme_adminq_poll_period_us": 10000, 00:18:56.425 "nvme_ioq_poll_period_us": 0, 00:18:56.425 "io_queue_requests": 512, 00:18:56.425 "delay_cmd_submit": true, 00:18:56.425 
"transport_retry_count": 4, 00:18:56.425 "bdev_retry_count": 3, 00:18:56.425 "transport_ack_timeout": 0, 00:18:56.425 "ctrlr_loss_timeout_sec": 0, 00:18:56.425 "reconnect_delay_sec": 0, 00:18:56.425 "fast_io_fail_timeout_sec": 0, 00:18:56.425 "disable_auto_failback": false, 00:18:56.425 "generate_uuids": false, 00:18:56.425 "transport_tos": 0, 00:18:56.425 "nvme_error_stat": false, 00:18:56.425 "rdma_srq_size": 0, 00:18:56.425 "io_path_stat": false, 00:18:56.425 "allow_accel_sequence": false, 00:18:56.425 "rdma_max_cq_size": 0, 00:18:56.425 "rdma_cm_event_timeout_ms": 0, 00:18:56.425 "dhchap_digests": [ 00:18:56.425 "sha256", 00:18:56.425 "sha384", 00:18:56.425 "sha512" 00:18:56.425 ], 00:18:56.425 "dhchap_dhgroups": [ 00:18:56.425 "null", 00:18:56.425 "ffdhe2048", 00:18:56.425 "ffdhe3072", 00:18:56.425 "ffdhe4096", 00:18:56.425 "ffdhe6144", 00:18:56.425 "ffdhe8192" 00:18:56.425 ] 00:18:56.425 } 00:18:56.425 }, 00:18:56.425 { 00:18:56.425 "method": "bdev_nvme_attach_controller", 00:18:56.425 "params": { 00:18:56.425 "name": "TLSTEST", 00:18:56.425 "trtype": "TCP", 00:18:56.425 "adrfam": "IPv4", 00:18:56.425 "traddr": "10.0.0.2", 00:18:56.425 "trsvcid": "4420", 00:18:56.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.425 "prchk_reftag": false, 00:18:56.425 "prchk_guard": false, 00:18:56.425 "ctrlr_loss_timeout_sec": 0, 00:18:56.425 "reconnect_delay_sec": 0, 00:18:56.425 "fast_io_fail_timeout_sec": 0, 00:18:56.425 "psk": "/tmp/tmp.sG4BvOsBCe", 00:18:56.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:56.425 "hdgst": false, 00:18:56.425 "ddgst": false 00:18:56.425 } 00:18:56.425 }, 00:18:56.425 { 00:18:56.425 "method": "bdev_nvme_set_hotplug", 00:18:56.425 "params": { 00:18:56.425 "period_us": 100000, 00:18:56.425 "enable": false 00:18:56.425 } 00:18:56.425 }, 00:18:56.425 { 00:18:56.425 "method": "bdev_wait_for_examine" 00:18:56.425 } 00:18:56.425 ] 00:18:56.425 }, 00:18:56.425 { 00:18:56.425 "subsystem": "nbd", 00:18:56.425 "config": [] 00:18:56.425 } 00:18:56.425 ] 00:18:56.425 }' 00:18:56.425 17:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:56.425 17:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.685 [2024-05-15 17:09:44.123106] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:18:56.685 [2024-05-15 17:09:44.123154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097534 ] 00:18:56.685 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.685 [2024-05-15 17:09:44.172517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.685 [2024-05-15 17:09:44.244813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.944 [2024-05-15 17:09:44.379663] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:56.944 [2024-05-15 17:09:44.379741] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:57.511 17:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:57.511 17:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:18:57.511 17:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:57.511 Running I/O for 10 seconds... 00:19:07.488 00:19:07.488 Latency(us) 00:19:07.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.488 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:07.488 Verification LBA range: start 0x0 length 0x2000 00:19:07.488 TLSTESTn1 : 10.02 5414.10 21.15 0.00 0.00 23601.54 6354.14 33964.74 00:19:07.488 =================================================================================================================== 00:19:07.488 Total : 5414.10 21.15 0.00 0.00 23601.54 6354.14 33964.74 00:19:07.488 0 00:19:07.488 17:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:07.488 17:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3097534 00:19:07.488 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3097534 ']' 00:19:07.488 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3097534 00:19:07.488 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:07.488 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:07.488 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3097534 00:19:07.488 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:07.488 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:07.488 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3097534' 00:19:07.488 killing process with pid 3097534 00:19:07.488 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3097534 00:19:07.488 Received shutdown signal, test time was about 10.000000 seconds 00:19:07.488 00:19:07.488 Latency(us) 00:19:07.488 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.488 =================================================================================================================== 00:19:07.488 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:07.488 [2024-05-15 17:09:55.115723] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:19:07.488 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3097534 00:19:07.747 17:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3097350 00:19:07.747 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3097350 ']' 00:19:07.747 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3097350 00:19:07.747 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:07.747 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:07.747 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3097350 00:19:07.747 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:07.747 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:07.747 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3097350' 00:19:07.747 killing process with pid 3097350 00:19:07.747 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3097350 00:19:07.747 [2024-05-15 17:09:55.369363] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:07.747 [2024-05-15 17:09:55.369413] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:07.747 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3097350 00:19:08.006 17:09:55 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:08.006 17:09:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:08.006 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:08.006 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.006 17:09:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3099375 00:19:08.006 17:09:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:08.006 17:09:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3099375 00:19:08.006 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3099375 ']' 00:19:08.006 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.006 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:08.006 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.006 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:08.006 17:09:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.006 [2024-05-15 17:09:55.640636] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:19:08.006 [2024-05-15 17:09:55.640683] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.006 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.266 [2024-05-15 17:09:55.696641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.266 [2024-05-15 17:09:55.765410] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.266 [2024-05-15 17:09:55.765450] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.266 [2024-05-15 17:09:55.765457] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.266 [2024-05-15 17:09:55.765464] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.266 [2024-05-15 17:09:55.765469] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.266 [2024-05-15 17:09:55.765487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.832 17:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:08.832 17:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:08.832 17:09:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.832 17:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.832 17:09:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.832 17:09:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.832 17:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.sG4BvOsBCe 00:19:08.832 17:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sG4BvOsBCe 00:19:08.832 17:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:09.093 [2024-05-15 17:09:56.621675] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.093 17:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:09.423 17:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:09.423 [2024-05-15 17:09:56.962533] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:09.423 [2024-05-15 17:09:56.962580] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:09.423 [2024-05-15 17:09:56.962767] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.423 17:09:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:09.682 malloc0 00:19:09.682 17:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
00:19:09.682 17:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sG4BvOsBCe 00:19:09.940 [2024-05-15 17:09:57.492278] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:09.940 17:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3099852 00:19:09.940 17:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:09.940 17:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3099852 /var/tmp/bdevperf.sock 00:19:09.940 17:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3099852 ']' 00:19:09.940 17:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.940 17:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:09.940 17:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:09.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.940 17:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:09.940 17:09:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.940 17:09:57 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:09.940 [2024-05-15 17:09:57.555845] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:19:09.940 [2024-05-15 17:09:57.555890] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099852 ] 00:19:09.940 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.199 [2024-05-15 17:09:57.608995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.199 [2024-05-15 17:09:57.683934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.767 17:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:10.767 17:09:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:10.767 17:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sG4BvOsBCe 00:19:11.025 17:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:11.025 [2024-05-15 17:09:58.678485] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:11.284 nvme0n1 00:19:11.284 17:09:58 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:11.284 Running I/O for 1 seconds... 
00:19:12.659 00:19:12.659 Latency(us) 00:19:12.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.659 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:12.659 Verification LBA range: start 0x0 length 0x2000 00:19:12.659 nvme0n1 : 1.02 4766.28 18.62 0.00 0.00 26628.86 4786.98 53568.56 00:19:12.659 =================================================================================================================== 00:19:12.659 Total : 4766.28 18.62 0.00 0.00 26628.86 4786.98 53568.56 00:19:12.659 0 00:19:12.659 17:09:59 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3099852 00:19:12.659 17:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3099852 ']' 00:19:12.659 17:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3099852 00:19:12.659 17:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:12.659 17:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:12.659 17:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3099852 00:19:12.659 17:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:12.659 17:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:12.659 17:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3099852' 00:19:12.659 killing process with pid 3099852 00:19:12.659 17:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3099852 00:19:12.659 Received shutdown signal, test time was about 1.000000 seconds 00:19:12.659 00:19:12.659 Latency(us) 00:19:12.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.659 =================================================================================================================== 00:19:12.659 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.659 17:09:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3099852 00:19:12.659 17:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3099375 00:19:12.659 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3099375 ']' 00:19:12.659 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3099375 00:19:12.659 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:12.659 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:12.659 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3099375 00:19:12.659 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:12.659 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:12.659 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3099375' 00:19:12.659 killing process with pid 3099375 00:19:12.659 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3099375 00:19:12.659 [2024-05-15 17:10:00.195384] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:12.659 [2024-05-15 17:10:00.195428] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:12.659 17:10:00 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 3099375 00:19:12.918 17:10:00 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:19:12.918 17:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:12.918 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:12.918 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.918 17:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3100331 00:19:12.918 17:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3100331 00:19:12.918 17:10:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:12.918 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3100331 ']' 00:19:12.918 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.918 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:12.918 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.918 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:12.918 17:10:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.918 [2024-05-15 17:10:00.471317] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:19:12.918 [2024-05-15 17:10:00.471363] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.918 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.918 [2024-05-15 17:10:00.527343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.177 [2024-05-15 17:10:00.597143] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.177 [2024-05-15 17:10:00.597185] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.177 [2024-05-15 17:10:00.597192] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.177 [2024-05-15 17:10:00.597198] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:13.177 [2024-05-15 17:10:00.597203] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
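Note on the pass that follows: here the pre-shared key is handled through the keyring on both sides rather than as an in-config file path. The bdevperf commands traced below register the key file as "key0" and attach by key name, and the configs saved at the end of the pass reference "psk": "key0" in nvmf_subsystem_add_host and bdev_nvme_attach_controller instead of /tmp/tmp.sG4BvOsBCe. A minimal sketch of that two-step attach against the bdevperf RPC socket, assuming the same key file, address, and NQNs; both commands appear verbatim below (target/tls.sh@255-@256), with the workspace path shortened to scripts/rpc.py:

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sG4BvOsBCe
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1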
00:19:13.177 [2024-05-15 17:10:00.597224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.743 [2024-05-15 17:10:01.296897] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.743 malloc0 00:19:13.743 [2024-05-15 17:10:01.325189] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:13.743 [2024-05-15 17:10:01.325240] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:13.743 [2024-05-15 17:10:01.325430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3100394 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3100394 /var/tmp/bdevperf.sock 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3100394 ']' 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.743 17:10:01 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:13.743 [2024-05-15 17:10:01.397371] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:19:13.743 [2024-05-15 17:10:01.397412] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100394 ] 00:19:14.001 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.001 [2024-05-15 17:10:01.452263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.001 [2024-05-15 17:10:01.531176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.567 17:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:14.567 17:10:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:14.567 17:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.sG4BvOsBCe 00:19:14.824 17:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:15.082 [2024-05-15 17:10:02.522015] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.082 nvme0n1 00:19:15.082 17:10:02 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:15.082 Running I/O for 1 seconds... 00:19:16.456 00:19:16.456 Latency(us) 00:19:16.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.456 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:16.456 Verification LBA range: start 0x0 length 0x2000 00:19:16.456 nvme0n1 : 1.02 5468.47 21.36 0.00 0.00 23204.92 6183.18 36016.31 00:19:16.456 =================================================================================================================== 00:19:16.456 Total : 5468.47 21.36 0.00 0.00 23204.92 6183.18 36016.31 00:19:16.456 0 00:19:16.456 17:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:19:16.456 17:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.456 17:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.456 17:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.456 17:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:19:16.456 "subsystems": [ 00:19:16.456 { 00:19:16.456 "subsystem": "keyring", 00:19:16.456 "config": [ 00:19:16.456 { 00:19:16.456 "method": "keyring_file_add_key", 00:19:16.456 "params": { 00:19:16.456 "name": "key0", 00:19:16.456 "path": "/tmp/tmp.sG4BvOsBCe" 00:19:16.456 } 00:19:16.456 } 00:19:16.456 ] 00:19:16.456 }, 00:19:16.456 { 00:19:16.456 "subsystem": "iobuf", 00:19:16.456 "config": [ 00:19:16.456 { 00:19:16.456 "method": "iobuf_set_options", 00:19:16.456 "params": { 00:19:16.456 "small_pool_count": 8192, 00:19:16.456 "large_pool_count": 1024, 00:19:16.456 "small_bufsize": 8192, 00:19:16.456 "large_bufsize": 135168 00:19:16.456 } 00:19:16.456 } 00:19:16.456 ] 00:19:16.456 }, 00:19:16.456 { 00:19:16.456 "subsystem": "sock", 00:19:16.456 "config": [ 00:19:16.456 { 00:19:16.456 "method": "sock_impl_set_options", 00:19:16.456 "params": { 00:19:16.456 "impl_name": "posix", 00:19:16.456 "recv_buf_size": 2097152, 
00:19:16.456 "send_buf_size": 2097152, 00:19:16.456 "enable_recv_pipe": true, 00:19:16.456 "enable_quickack": false, 00:19:16.456 "enable_placement_id": 0, 00:19:16.456 "enable_zerocopy_send_server": true, 00:19:16.456 "enable_zerocopy_send_client": false, 00:19:16.456 "zerocopy_threshold": 0, 00:19:16.456 "tls_version": 0, 00:19:16.456 "enable_ktls": false 00:19:16.456 } 00:19:16.456 }, 00:19:16.456 { 00:19:16.456 "method": "sock_impl_set_options", 00:19:16.456 "params": { 00:19:16.456 "impl_name": "ssl", 00:19:16.456 "recv_buf_size": 4096, 00:19:16.456 "send_buf_size": 4096, 00:19:16.456 "enable_recv_pipe": true, 00:19:16.456 "enable_quickack": false, 00:19:16.456 "enable_placement_id": 0, 00:19:16.456 "enable_zerocopy_send_server": true, 00:19:16.456 "enable_zerocopy_send_client": false, 00:19:16.456 "zerocopy_threshold": 0, 00:19:16.456 "tls_version": 0, 00:19:16.456 "enable_ktls": false 00:19:16.456 } 00:19:16.456 } 00:19:16.456 ] 00:19:16.456 }, 00:19:16.456 { 00:19:16.456 "subsystem": "vmd", 00:19:16.456 "config": [] 00:19:16.456 }, 00:19:16.456 { 00:19:16.456 "subsystem": "accel", 00:19:16.456 "config": [ 00:19:16.456 { 00:19:16.456 "method": "accel_set_options", 00:19:16.456 "params": { 00:19:16.456 "small_cache_size": 128, 00:19:16.456 "large_cache_size": 16, 00:19:16.456 "task_count": 2048, 00:19:16.456 "sequence_count": 2048, 00:19:16.456 "buf_count": 2048 00:19:16.456 } 00:19:16.456 } 00:19:16.456 ] 00:19:16.456 }, 00:19:16.456 { 00:19:16.456 "subsystem": "bdev", 00:19:16.456 "config": [ 00:19:16.456 { 00:19:16.456 "method": "bdev_set_options", 00:19:16.456 "params": { 00:19:16.456 "bdev_io_pool_size": 65535, 00:19:16.456 "bdev_io_cache_size": 256, 00:19:16.456 "bdev_auto_examine": true, 00:19:16.456 "iobuf_small_cache_size": 128, 00:19:16.456 "iobuf_large_cache_size": 16 00:19:16.456 } 00:19:16.456 }, 00:19:16.456 { 00:19:16.456 "method": "bdev_raid_set_options", 00:19:16.456 "params": { 00:19:16.456 "process_window_size_kb": 1024 00:19:16.456 } 00:19:16.456 }, 00:19:16.456 { 00:19:16.457 "method": "bdev_iscsi_set_options", 00:19:16.457 "params": { 00:19:16.457 "timeout_sec": 30 00:19:16.457 } 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "method": "bdev_nvme_set_options", 00:19:16.457 "params": { 00:19:16.457 "action_on_timeout": "none", 00:19:16.457 "timeout_us": 0, 00:19:16.457 "timeout_admin_us": 0, 00:19:16.457 "keep_alive_timeout_ms": 10000, 00:19:16.457 "arbitration_burst": 0, 00:19:16.457 "low_priority_weight": 0, 00:19:16.457 "medium_priority_weight": 0, 00:19:16.457 "high_priority_weight": 0, 00:19:16.457 "nvme_adminq_poll_period_us": 10000, 00:19:16.457 "nvme_ioq_poll_period_us": 0, 00:19:16.457 "io_queue_requests": 0, 00:19:16.457 "delay_cmd_submit": true, 00:19:16.457 "transport_retry_count": 4, 00:19:16.457 "bdev_retry_count": 3, 00:19:16.457 "transport_ack_timeout": 0, 00:19:16.457 "ctrlr_loss_timeout_sec": 0, 00:19:16.457 "reconnect_delay_sec": 0, 00:19:16.457 "fast_io_fail_timeout_sec": 0, 00:19:16.457 "disable_auto_failback": false, 00:19:16.457 "generate_uuids": false, 00:19:16.457 "transport_tos": 0, 00:19:16.457 "nvme_error_stat": false, 00:19:16.457 "rdma_srq_size": 0, 00:19:16.457 "io_path_stat": false, 00:19:16.457 "allow_accel_sequence": false, 00:19:16.457 "rdma_max_cq_size": 0, 00:19:16.457 "rdma_cm_event_timeout_ms": 0, 00:19:16.457 "dhchap_digests": [ 00:19:16.457 "sha256", 00:19:16.457 "sha384", 00:19:16.457 "sha512" 00:19:16.457 ], 00:19:16.457 "dhchap_dhgroups": [ 00:19:16.457 "null", 00:19:16.457 "ffdhe2048", 00:19:16.457 "ffdhe3072", 
00:19:16.457 "ffdhe4096", 00:19:16.457 "ffdhe6144", 00:19:16.457 "ffdhe8192" 00:19:16.457 ] 00:19:16.457 } 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "method": "bdev_nvme_set_hotplug", 00:19:16.457 "params": { 00:19:16.457 "period_us": 100000, 00:19:16.457 "enable": false 00:19:16.457 } 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "method": "bdev_malloc_create", 00:19:16.457 "params": { 00:19:16.457 "name": "malloc0", 00:19:16.457 "num_blocks": 8192, 00:19:16.457 "block_size": 4096, 00:19:16.457 "physical_block_size": 4096, 00:19:16.457 "uuid": "ab037137-0d60-4bef-a873-cdb8623d83ed", 00:19:16.457 "optimal_io_boundary": 0 00:19:16.457 } 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "method": "bdev_wait_for_examine" 00:19:16.457 } 00:19:16.457 ] 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "subsystem": "nbd", 00:19:16.457 "config": [] 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "subsystem": "scheduler", 00:19:16.457 "config": [ 00:19:16.457 { 00:19:16.457 "method": "framework_set_scheduler", 00:19:16.457 "params": { 00:19:16.457 "name": "static" 00:19:16.457 } 00:19:16.457 } 00:19:16.457 ] 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "subsystem": "nvmf", 00:19:16.457 "config": [ 00:19:16.457 { 00:19:16.457 "method": "nvmf_set_config", 00:19:16.457 "params": { 00:19:16.457 "discovery_filter": "match_any", 00:19:16.457 "admin_cmd_passthru": { 00:19:16.457 "identify_ctrlr": false 00:19:16.457 } 00:19:16.457 } 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "method": "nvmf_set_max_subsystems", 00:19:16.457 "params": { 00:19:16.457 "max_subsystems": 1024 00:19:16.457 } 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "method": "nvmf_set_crdt", 00:19:16.457 "params": { 00:19:16.457 "crdt1": 0, 00:19:16.457 "crdt2": 0, 00:19:16.457 "crdt3": 0 00:19:16.457 } 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "method": "nvmf_create_transport", 00:19:16.457 "params": { 00:19:16.457 "trtype": "TCP", 00:19:16.457 "max_queue_depth": 128, 00:19:16.457 "max_io_qpairs_per_ctrlr": 127, 00:19:16.457 "in_capsule_data_size": 4096, 00:19:16.457 "max_io_size": 131072, 00:19:16.457 "io_unit_size": 131072, 00:19:16.457 "max_aq_depth": 128, 00:19:16.457 "num_shared_buffers": 511, 00:19:16.457 "buf_cache_size": 4294967295, 00:19:16.457 "dif_insert_or_strip": false, 00:19:16.457 "zcopy": false, 00:19:16.457 "c2h_success": false, 00:19:16.457 "sock_priority": 0, 00:19:16.457 "abort_timeout_sec": 1, 00:19:16.457 "ack_timeout": 0, 00:19:16.457 "data_wr_pool_size": 0 00:19:16.457 } 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "method": "nvmf_create_subsystem", 00:19:16.457 "params": { 00:19:16.457 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.457 "allow_any_host": false, 00:19:16.457 "serial_number": "00000000000000000000", 00:19:16.457 "model_number": "SPDK bdev Controller", 00:19:16.457 "max_namespaces": 32, 00:19:16.457 "min_cntlid": 1, 00:19:16.457 "max_cntlid": 65519, 00:19:16.457 "ana_reporting": false 00:19:16.457 } 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "method": "nvmf_subsystem_add_host", 00:19:16.457 "params": { 00:19:16.457 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.457 "host": "nqn.2016-06.io.spdk:host1", 00:19:16.457 "psk": "key0" 00:19:16.457 } 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "method": "nvmf_subsystem_add_ns", 00:19:16.457 "params": { 00:19:16.457 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.457 "namespace": { 00:19:16.457 "nsid": 1, 00:19:16.457 "bdev_name": "malloc0", 00:19:16.457 "nguid": "AB0371370D604BEFA873CDB8623D83ED", 00:19:16.457 "uuid": "ab037137-0d60-4bef-a873-cdb8623d83ed", 00:19:16.457 
"no_auto_visible": false 00:19:16.457 } 00:19:16.457 } 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "method": "nvmf_subsystem_add_listener", 00:19:16.457 "params": { 00:19:16.457 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.457 "listen_address": { 00:19:16.457 "trtype": "TCP", 00:19:16.457 "adrfam": "IPv4", 00:19:16.457 "traddr": "10.0.0.2", 00:19:16.457 "trsvcid": "4420" 00:19:16.457 }, 00:19:16.457 "secure_channel": true 00:19:16.457 } 00:19:16.457 } 00:19:16.457 ] 00:19:16.457 } 00:19:16.457 ] 00:19:16.457 }' 00:19:16.457 17:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:16.457 17:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:19:16.457 "subsystems": [ 00:19:16.457 { 00:19:16.457 "subsystem": "keyring", 00:19:16.457 "config": [ 00:19:16.457 { 00:19:16.457 "method": "keyring_file_add_key", 00:19:16.457 "params": { 00:19:16.457 "name": "key0", 00:19:16.457 "path": "/tmp/tmp.sG4BvOsBCe" 00:19:16.457 } 00:19:16.457 } 00:19:16.457 ] 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "subsystem": "iobuf", 00:19:16.457 "config": [ 00:19:16.457 { 00:19:16.457 "method": "iobuf_set_options", 00:19:16.457 "params": { 00:19:16.457 "small_pool_count": 8192, 00:19:16.457 "large_pool_count": 1024, 00:19:16.457 "small_bufsize": 8192, 00:19:16.457 "large_bufsize": 135168 00:19:16.457 } 00:19:16.457 } 00:19:16.457 ] 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "subsystem": "sock", 00:19:16.457 "config": [ 00:19:16.457 { 00:19:16.457 "method": "sock_impl_set_options", 00:19:16.457 "params": { 00:19:16.457 "impl_name": "posix", 00:19:16.457 "recv_buf_size": 2097152, 00:19:16.457 "send_buf_size": 2097152, 00:19:16.457 "enable_recv_pipe": true, 00:19:16.457 "enable_quickack": false, 00:19:16.457 "enable_placement_id": 0, 00:19:16.457 "enable_zerocopy_send_server": true, 00:19:16.457 "enable_zerocopy_send_client": false, 00:19:16.457 "zerocopy_threshold": 0, 00:19:16.457 "tls_version": 0, 00:19:16.457 "enable_ktls": false 00:19:16.457 } 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "method": "sock_impl_set_options", 00:19:16.457 "params": { 00:19:16.457 "impl_name": "ssl", 00:19:16.457 "recv_buf_size": 4096, 00:19:16.457 "send_buf_size": 4096, 00:19:16.457 "enable_recv_pipe": true, 00:19:16.457 "enable_quickack": false, 00:19:16.457 "enable_placement_id": 0, 00:19:16.457 "enable_zerocopy_send_server": true, 00:19:16.457 "enable_zerocopy_send_client": false, 00:19:16.457 "zerocopy_threshold": 0, 00:19:16.457 "tls_version": 0, 00:19:16.457 "enable_ktls": false 00:19:16.457 } 00:19:16.457 } 00:19:16.457 ] 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "subsystem": "vmd", 00:19:16.457 "config": [] 00:19:16.457 }, 00:19:16.457 { 00:19:16.457 "subsystem": "accel", 00:19:16.457 "config": [ 00:19:16.457 { 00:19:16.457 "method": "accel_set_options", 00:19:16.457 "params": { 00:19:16.457 "small_cache_size": 128, 00:19:16.457 "large_cache_size": 16, 00:19:16.457 "task_count": 2048, 00:19:16.457 "sequence_count": 2048, 00:19:16.457 "buf_count": 2048 00:19:16.457 } 00:19:16.457 } 00:19:16.457 ] 00:19:16.457 }, 00:19:16.457 { 00:19:16.458 "subsystem": "bdev", 00:19:16.458 "config": [ 00:19:16.458 { 00:19:16.458 "method": "bdev_set_options", 00:19:16.458 "params": { 00:19:16.458 "bdev_io_pool_size": 65535, 00:19:16.458 "bdev_io_cache_size": 256, 00:19:16.458 "bdev_auto_examine": true, 00:19:16.458 "iobuf_small_cache_size": 128, 00:19:16.458 "iobuf_large_cache_size": 16 00:19:16.458 } 00:19:16.458 }, 
00:19:16.458 { 00:19:16.458 "method": "bdev_raid_set_options", 00:19:16.458 "params": { 00:19:16.458 "process_window_size_kb": 1024 00:19:16.458 } 00:19:16.458 }, 00:19:16.458 { 00:19:16.458 "method": "bdev_iscsi_set_options", 00:19:16.458 "params": { 00:19:16.458 "timeout_sec": 30 00:19:16.458 } 00:19:16.458 }, 00:19:16.458 { 00:19:16.458 "method": "bdev_nvme_set_options", 00:19:16.458 "params": { 00:19:16.458 "action_on_timeout": "none", 00:19:16.458 "timeout_us": 0, 00:19:16.458 "timeout_admin_us": 0, 00:19:16.458 "keep_alive_timeout_ms": 10000, 00:19:16.458 "arbitration_burst": 0, 00:19:16.458 "low_priority_weight": 0, 00:19:16.458 "medium_priority_weight": 0, 00:19:16.458 "high_priority_weight": 0, 00:19:16.458 "nvme_adminq_poll_period_us": 10000, 00:19:16.458 "nvme_ioq_poll_period_us": 0, 00:19:16.458 "io_queue_requests": 512, 00:19:16.458 "delay_cmd_submit": true, 00:19:16.458 "transport_retry_count": 4, 00:19:16.458 "bdev_retry_count": 3, 00:19:16.458 "transport_ack_timeout": 0, 00:19:16.458 "ctrlr_loss_timeout_sec": 0, 00:19:16.458 "reconnect_delay_sec": 0, 00:19:16.458 "fast_io_fail_timeout_sec": 0, 00:19:16.458 "disable_auto_failback": false, 00:19:16.458 "generate_uuids": false, 00:19:16.458 "transport_tos": 0, 00:19:16.458 "nvme_error_stat": false, 00:19:16.458 "rdma_srq_size": 0, 00:19:16.458 "io_path_stat": false, 00:19:16.458 "allow_accel_sequence": false, 00:19:16.458 "rdma_max_cq_size": 0, 00:19:16.458 "rdma_cm_event_timeout_ms": 0, 00:19:16.458 "dhchap_digests": [ 00:19:16.458 "sha256", 00:19:16.458 "sha384", 00:19:16.458 "sha512" 00:19:16.458 ], 00:19:16.458 "dhchap_dhgroups": [ 00:19:16.458 "null", 00:19:16.458 "ffdhe2048", 00:19:16.458 "ffdhe3072", 00:19:16.458 "ffdhe4096", 00:19:16.458 "ffdhe6144", 00:19:16.458 "ffdhe8192" 00:19:16.458 ] 00:19:16.458 } 00:19:16.458 }, 00:19:16.458 { 00:19:16.458 "method": "bdev_nvme_attach_controller", 00:19:16.458 "params": { 00:19:16.458 "name": "nvme0", 00:19:16.458 "trtype": "TCP", 00:19:16.458 "adrfam": "IPv4", 00:19:16.458 "traddr": "10.0.0.2", 00:19:16.458 "trsvcid": "4420", 00:19:16.458 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.458 "prchk_reftag": false, 00:19:16.458 "prchk_guard": false, 00:19:16.458 "ctrlr_loss_timeout_sec": 0, 00:19:16.458 "reconnect_delay_sec": 0, 00:19:16.458 "fast_io_fail_timeout_sec": 0, 00:19:16.458 "psk": "key0", 00:19:16.458 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.458 "hdgst": false, 00:19:16.458 "ddgst": false 00:19:16.458 } 00:19:16.458 }, 00:19:16.458 { 00:19:16.458 "method": "bdev_nvme_set_hotplug", 00:19:16.458 "params": { 00:19:16.458 "period_us": 100000, 00:19:16.458 "enable": false 00:19:16.458 } 00:19:16.458 }, 00:19:16.458 { 00:19:16.458 "method": "bdev_enable_histogram", 00:19:16.458 "params": { 00:19:16.458 "name": "nvme0n1", 00:19:16.458 "enable": true 00:19:16.458 } 00:19:16.458 }, 00:19:16.458 { 00:19:16.458 "method": "bdev_wait_for_examine" 00:19:16.458 } 00:19:16.458 ] 00:19:16.458 }, 00:19:16.458 { 00:19:16.458 "subsystem": "nbd", 00:19:16.458 "config": [] 00:19:16.458 } 00:19:16.458 ] 00:19:16.458 }' 00:19:16.458 17:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3100394 00:19:16.458 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3100394 ']' 00:19:16.458 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3100394 00:19:16.458 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:16.458 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:16.458 
17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3100394 00:19:16.717 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:16.717 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:16.717 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3100394' 00:19:16.717 killing process with pid 3100394 00:19:16.717 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3100394 00:19:16.717 Received shutdown signal, test time was about 1.000000 seconds 00:19:16.717 00:19:16.717 Latency(us) 00:19:16.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.717 =================================================================================================================== 00:19:16.717 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:16.717 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3100394 00:19:16.717 17:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3100331 00:19:16.717 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3100331 ']' 00:19:16.717 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3100331 00:19:16.717 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:16.717 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:16.717 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3100331 00:19:16.975 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:16.975 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:16.975 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3100331' 00:19:16.975 killing process with pid 3100331 00:19:16.975 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3100331 00:19:16.975 [2024-05-15 17:10:04.382092] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:16.975 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3100331 00:19:16.975 17:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:19:16.975 17:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:16.975 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:16.975 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.975 17:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:19:16.975 "subsystems": [ 00:19:16.975 { 00:19:16.975 "subsystem": "keyring", 00:19:16.975 "config": [ 00:19:16.975 { 00:19:16.975 "method": "keyring_file_add_key", 00:19:16.975 "params": { 00:19:16.975 "name": "key0", 00:19:16.975 "path": "/tmp/tmp.sG4BvOsBCe" 00:19:16.975 } 00:19:16.975 } 00:19:16.975 ] 00:19:16.975 }, 00:19:16.975 { 00:19:16.975 "subsystem": "iobuf", 00:19:16.975 "config": [ 00:19:16.975 { 00:19:16.975 "method": "iobuf_set_options", 00:19:16.975 "params": { 00:19:16.975 "small_pool_count": 8192, 00:19:16.975 "large_pool_count": 1024, 00:19:16.975 "small_bufsize": 8192, 00:19:16.975 "large_bufsize": 135168 00:19:16.975 } 00:19:16.975 } 00:19:16.975 ] 00:19:16.975 }, 00:19:16.975 { 00:19:16.975 
"subsystem": "sock", 00:19:16.975 "config": [ 00:19:16.975 { 00:19:16.975 "method": "sock_impl_set_options", 00:19:16.975 "params": { 00:19:16.975 "impl_name": "posix", 00:19:16.975 "recv_buf_size": 2097152, 00:19:16.975 "send_buf_size": 2097152, 00:19:16.975 "enable_recv_pipe": true, 00:19:16.975 "enable_quickack": false, 00:19:16.975 "enable_placement_id": 0, 00:19:16.975 "enable_zerocopy_send_server": true, 00:19:16.975 "enable_zerocopy_send_client": false, 00:19:16.975 "zerocopy_threshold": 0, 00:19:16.975 "tls_version": 0, 00:19:16.975 "enable_ktls": false 00:19:16.975 } 00:19:16.975 }, 00:19:16.975 { 00:19:16.975 "method": "sock_impl_set_options", 00:19:16.975 "params": { 00:19:16.975 "impl_name": "ssl", 00:19:16.975 "recv_buf_size": 4096, 00:19:16.975 "send_buf_size": 4096, 00:19:16.975 "enable_recv_pipe": true, 00:19:16.975 "enable_quickack": false, 00:19:16.975 "enable_placement_id": 0, 00:19:16.975 "enable_zerocopy_send_server": true, 00:19:16.975 "enable_zerocopy_send_client": false, 00:19:16.975 "zerocopy_threshold": 0, 00:19:16.975 "tls_version": 0, 00:19:16.975 "enable_ktls": false 00:19:16.975 } 00:19:16.975 } 00:19:16.975 ] 00:19:16.975 }, 00:19:16.975 { 00:19:16.975 "subsystem": "vmd", 00:19:16.975 "config": [] 00:19:16.975 }, 00:19:16.975 { 00:19:16.975 "subsystem": "accel", 00:19:16.975 "config": [ 00:19:16.975 { 00:19:16.975 "method": "accel_set_options", 00:19:16.975 "params": { 00:19:16.975 "small_cache_size": 128, 00:19:16.975 "large_cache_size": 16, 00:19:16.975 "task_count": 2048, 00:19:16.975 "sequence_count": 2048, 00:19:16.975 "buf_count": 2048 00:19:16.975 } 00:19:16.975 } 00:19:16.975 ] 00:19:16.975 }, 00:19:16.975 { 00:19:16.975 "subsystem": "bdev", 00:19:16.975 "config": [ 00:19:16.975 { 00:19:16.975 "method": "bdev_set_options", 00:19:16.975 "params": { 00:19:16.975 "bdev_io_pool_size": 65535, 00:19:16.975 "bdev_io_cache_size": 256, 00:19:16.975 "bdev_auto_examine": true, 00:19:16.975 "iobuf_small_cache_size": 128, 00:19:16.975 "iobuf_large_cache_size": 16 00:19:16.975 } 00:19:16.975 }, 00:19:16.975 { 00:19:16.975 "method": "bdev_raid_set_options", 00:19:16.975 "params": { 00:19:16.975 "process_window_size_kb": 1024 00:19:16.975 } 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "method": "bdev_iscsi_set_options", 00:19:16.976 "params": { 00:19:16.976 "timeout_sec": 30 00:19:16.976 } 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "method": "bdev_nvme_set_options", 00:19:16.976 "params": { 00:19:16.976 "action_on_timeout": "none", 00:19:16.976 "timeout_us": 0, 00:19:16.976 "timeout_admin_us": 0, 00:19:16.976 "keep_alive_timeout_ms": 10000, 00:19:16.976 "arbitration_burst": 0, 00:19:16.976 "low_priority_weight": 0, 00:19:16.976 "medium_priority_weight": 0, 00:19:16.976 "high_priority_weight": 0, 00:19:16.976 "nvme_adminq_poll_period_us": 10000, 00:19:16.976 "nvme_ioq_poll_period_us": 0, 00:19:16.976 "io_queue_requests": 0, 00:19:16.976 "delay_cmd_submit": true, 00:19:16.976 "transport_retry_count": 4, 00:19:16.976 "bdev_retry_count": 3, 00:19:16.976 "transport_ack_timeout": 0, 00:19:16.976 "ctrlr_loss_timeout_sec": 0, 00:19:16.976 "reconnect_delay_sec": 0, 00:19:16.976 "fast_io_fail_timeout_sec": 0, 00:19:16.976 "disable_auto_failback": false, 00:19:16.976 "generate_uuids": false, 00:19:16.976 "transport_tos": 0, 00:19:16.976 "nvme_error_stat": false, 00:19:16.976 "rdma_srq_size": 0, 00:19:16.976 "io_path_stat": false, 00:19:16.976 "allow_accel_sequence": false, 00:19:16.976 "rdma_max_cq_size": 0, 00:19:16.976 "rdma_cm_event_timeout_ms": 0, 00:19:16.976 
"dhchap_digests": [ 00:19:16.976 "sha256", 00:19:16.976 "sha384", 00:19:16.976 "sha512" 00:19:16.976 ], 00:19:16.976 "dhchap_dhgroups": [ 00:19:16.976 "null", 00:19:16.976 "ffdhe2048", 00:19:16.976 "ffdhe3072", 00:19:16.976 "ffdhe4096", 00:19:16.976 "ffdhe6144", 00:19:16.976 "ffdhe8192" 00:19:16.976 ] 00:19:16.976 } 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "method": "bdev_nvme_set_hotplug", 00:19:16.976 "params": { 00:19:16.976 "period_us": 100000, 00:19:16.976 "enable": false 00:19:16.976 } 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "method": "bdev_malloc_create", 00:19:16.976 "params": { 00:19:16.976 "name": "malloc0", 00:19:16.976 "num_blocks": 8192, 00:19:16.976 "block_size": 4096, 00:19:16.976 "physical_block_size": 4096, 00:19:16.976 "uuid": "ab037137-0d60-4bef-a873-cdb8623d83ed", 00:19:16.976 "optimal_io_boundary": 0 00:19:16.976 } 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "method": "bdev_wait_for_examine" 00:19:16.976 } 00:19:16.976 ] 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "subsystem": "nbd", 00:19:16.976 "config": [] 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "subsystem": "scheduler", 00:19:16.976 "config": [ 00:19:16.976 { 00:19:16.976 "method": "framework_set_scheduler", 00:19:16.976 "params": { 00:19:16.976 "name": "static" 00:19:16.976 } 00:19:16.976 } 00:19:16.976 ] 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "subsystem": "nvmf", 00:19:16.976 "config": [ 00:19:16.976 { 00:19:16.976 "method": "nvmf_set_config", 00:19:16.976 "params": { 00:19:16.976 "discovery_filter": "match_any", 00:19:16.976 "admin_cmd_passthru": { 00:19:16.976 "identify_ctrlr": false 00:19:16.976 } 00:19:16.976 } 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "method": "nvmf_set_max_subsystems", 00:19:16.976 "params": { 00:19:16.976 "max_subsystems": 1024 00:19:16.976 } 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "method": "nvmf_set_crdt", 00:19:16.976 "params": { 00:19:16.976 "crdt1": 0, 00:19:16.976 "crdt2": 0, 00:19:16.976 "crdt3": 0 00:19:16.976 } 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "method": "nvmf_create_transport", 00:19:16.976 "params": { 00:19:16.976 "trtype": "TCP", 00:19:16.976 "max_queue_depth": 128, 00:19:16.976 "max_io_qpairs_per_ctrlr": 127, 00:19:16.976 "in_capsule_data_size": 4096, 00:19:16.976 "max_io_size": 131072, 00:19:16.976 "io_unit_size": 131072, 00:19:16.976 "max_aq_depth": 128, 00:19:16.976 "num_shared_buffers": 511, 00:19:16.976 "buf_cache_size": 4294967295, 00:19:16.976 "dif_insert_or_strip": false, 00:19:16.976 "zcopy": false, 00:19:16.976 "c2h_success": false, 00:19:16.976 "sock_priority": 0, 00:19:16.976 "abort_timeout_sec": 1, 00:19:16.976 "ack_timeout": 0, 00:19:16.976 "data_wr_pool_size": 0 00:19:16.976 } 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "method": "nvmf_create_subsystem", 00:19:16.976 "params": { 00:19:16.976 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.976 "allow_any_host": false, 00:19:16.976 "serial_number": "00000000000000000000", 00:19:16.976 "model_number": "SPDK bdev Controller", 00:19:16.976 "max_namespaces": 32, 00:19:16.976 "min_cntlid": 1, 00:19:16.976 "max_cntlid": 65519, 00:19:16.976 "ana_reporting": false 00:19:16.976 } 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "method": "nvmf_subsystem_add_host", 00:19:16.976 "params": { 00:19:16.976 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.976 "host": "nqn.2016-06.io.spdk:host1", 00:19:16.976 "psk": "key0" 00:19:16.976 } 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "method": "nvmf_subsystem_add_ns", 00:19:16.976 "params": { 00:19:16.976 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.976 
"namespace": { 00:19:16.976 "nsid": 1, 00:19:16.976 "bdev_name": "malloc0", 00:19:16.976 "nguid": "AB0371370D604BEFA873CDB8623D83ED", 00:19:16.976 "uuid": "ab037137-0d60-4bef-a873-cdb8623d83ed", 00:19:16.976 "no_auto_visible": false 00:19:16.976 } 00:19:16.976 } 00:19:16.976 }, 00:19:16.976 { 00:19:16.976 "method": "nvmf_subsystem_add_listener", 00:19:16.976 "params": { 00:19:16.976 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.976 "listen_address": { 00:19:16.976 "trtype": "TCP", 00:19:16.976 "adrfam": "IPv4", 00:19:16.976 "traddr": "10.0.0.2", 00:19:16.976 "trsvcid": "4420" 00:19:16.976 }, 00:19:16.976 "secure_channel": true 00:19:16.976 } 00:19:16.976 } 00:19:16.976 ] 00:19:16.976 } 00:19:16.976 ] 00:19:16.976 }' 00:19:16.976 17:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3101052 00:19:16.976 17:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3101052 00:19:16.976 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3101052 ']' 00:19:16.976 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.976 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:16.976 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.976 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:16.976 17:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:16.976 17:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.234 [2024-05-15 17:10:04.653691] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:19:17.234 [2024-05-15 17:10:04.653735] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.234 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.234 [2024-05-15 17:10:04.710197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.234 [2024-05-15 17:10:04.788745] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.234 [2024-05-15 17:10:04.788782] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.234 [2024-05-15 17:10:04.788788] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.234 [2024-05-15 17:10:04.788795] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.234 [2024-05-15 17:10:04.788800] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:17.234 [2024-05-15 17:10:04.788849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.492 [2024-05-15 17:10:04.991888] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.492 [2024-05-15 17:10:05.023892] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:17.492 [2024-05-15 17:10:05.023933] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:17.492 [2024-05-15 17:10:05.032503] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:18.058 17:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:18.058 17:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:18.058 17:10:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.058 17:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.058 17:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.058 17:10:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.058 17:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3101092 00:19:18.058 17:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3101092 /var/tmp/bdevperf.sock 00:19:18.058 17:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3101092 ']' 00:19:18.058 17:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.058 17:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:18.058 17:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
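Before the initiator side comes up, note where the TLS wiring lives in the target config echoed above: keyring_file_add_key registers key0 as the PSK file /tmp/tmp.sG4BvOsBCe, nvmf_subsystem_add_host binds host1 to that key via "psk": "key0", and nvmf_subsystem_add_listener asks for "secure_channel": true on 10.0.0.2:4420. A small sketch of reading those pieces back from the running target; save_config is the same RPC used elsewhere in this test, the socket path is the one waitforlisten polled above, and the jq filter is purely illustrative:

    # Sketch: dump the live config and keep only the TLS-relevant methods.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock save_config | jq '
      .subsystems[].config[]?
      | select(.method == "keyring_file_add_key"
            or .method == "nvmf_subsystem_add_host"
            or .method == "nvmf_subsystem_add_listener")'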
00:19:18.058 17:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:18.059 17:10:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.059 17:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:18.059 17:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:19:18.059 "subsystems": [ 00:19:18.059 { 00:19:18.059 "subsystem": "keyring", 00:19:18.059 "config": [ 00:19:18.059 { 00:19:18.059 "method": "keyring_file_add_key", 00:19:18.059 "params": { 00:19:18.059 "name": "key0", 00:19:18.059 "path": "/tmp/tmp.sG4BvOsBCe" 00:19:18.059 } 00:19:18.059 } 00:19:18.059 ] 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "subsystem": "iobuf", 00:19:18.059 "config": [ 00:19:18.059 { 00:19:18.059 "method": "iobuf_set_options", 00:19:18.059 "params": { 00:19:18.059 "small_pool_count": 8192, 00:19:18.059 "large_pool_count": 1024, 00:19:18.059 "small_bufsize": 8192, 00:19:18.059 "large_bufsize": 135168 00:19:18.059 } 00:19:18.059 } 00:19:18.059 ] 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "subsystem": "sock", 00:19:18.059 "config": [ 00:19:18.059 { 00:19:18.059 "method": "sock_impl_set_options", 00:19:18.059 "params": { 00:19:18.059 "impl_name": "posix", 00:19:18.059 "recv_buf_size": 2097152, 00:19:18.059 "send_buf_size": 2097152, 00:19:18.059 "enable_recv_pipe": true, 00:19:18.059 "enable_quickack": false, 00:19:18.059 "enable_placement_id": 0, 00:19:18.059 "enable_zerocopy_send_server": true, 00:19:18.059 "enable_zerocopy_send_client": false, 00:19:18.059 "zerocopy_threshold": 0, 00:19:18.059 "tls_version": 0, 00:19:18.059 "enable_ktls": false 00:19:18.059 } 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "method": "sock_impl_set_options", 00:19:18.059 "params": { 00:19:18.059 "impl_name": "ssl", 00:19:18.059 "recv_buf_size": 4096, 00:19:18.059 "send_buf_size": 4096, 00:19:18.059 "enable_recv_pipe": true, 00:19:18.059 "enable_quickack": false, 00:19:18.059 "enable_placement_id": 0, 00:19:18.059 "enable_zerocopy_send_server": true, 00:19:18.059 "enable_zerocopy_send_client": false, 00:19:18.059 "zerocopy_threshold": 0, 00:19:18.059 "tls_version": 0, 00:19:18.059 "enable_ktls": false 00:19:18.059 } 00:19:18.059 } 00:19:18.059 ] 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "subsystem": "vmd", 00:19:18.059 "config": [] 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "subsystem": "accel", 00:19:18.059 "config": [ 00:19:18.059 { 00:19:18.059 "method": "accel_set_options", 00:19:18.059 "params": { 00:19:18.059 "small_cache_size": 128, 00:19:18.059 "large_cache_size": 16, 00:19:18.059 "task_count": 2048, 00:19:18.059 "sequence_count": 2048, 00:19:18.059 "buf_count": 2048 00:19:18.059 } 00:19:18.059 } 00:19:18.059 ] 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "subsystem": "bdev", 00:19:18.059 "config": [ 00:19:18.059 { 00:19:18.059 "method": "bdev_set_options", 00:19:18.059 "params": { 00:19:18.059 "bdev_io_pool_size": 65535, 00:19:18.059 "bdev_io_cache_size": 256, 00:19:18.059 "bdev_auto_examine": true, 00:19:18.059 "iobuf_small_cache_size": 128, 00:19:18.059 "iobuf_large_cache_size": 16 00:19:18.059 } 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "method": "bdev_raid_set_options", 00:19:18.059 "params": { 00:19:18.059 "process_window_size_kb": 1024 00:19:18.059 } 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "method": "bdev_iscsi_set_options", 00:19:18.059 "params": { 00:19:18.059 "timeout_sec": 30 00:19:18.059 } 
00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "method": "bdev_nvme_set_options", 00:19:18.059 "params": { 00:19:18.059 "action_on_timeout": "none", 00:19:18.059 "timeout_us": 0, 00:19:18.059 "timeout_admin_us": 0, 00:19:18.059 "keep_alive_timeout_ms": 10000, 00:19:18.059 "arbitration_burst": 0, 00:19:18.059 "low_priority_weight": 0, 00:19:18.059 "medium_priority_weight": 0, 00:19:18.059 "high_priority_weight": 0, 00:19:18.059 "nvme_adminq_poll_period_us": 10000, 00:19:18.059 "nvme_ioq_poll_period_us": 0, 00:19:18.059 "io_queue_requests": 512, 00:19:18.059 "delay_cmd_submit": true, 00:19:18.059 "transport_retry_count": 4, 00:19:18.059 "bdev_retry_count": 3, 00:19:18.059 "transport_ack_timeout": 0, 00:19:18.059 "ctrlr_loss_timeout_sec": 0, 00:19:18.059 "reconnect_delay_sec": 0, 00:19:18.059 "fast_io_fail_timeout_sec": 0, 00:19:18.059 "disable_auto_failback": false, 00:19:18.059 "generate_uuids": false, 00:19:18.059 "transport_tos": 0, 00:19:18.059 "nvme_error_stat": false, 00:19:18.059 "rdma_srq_size": 0, 00:19:18.059 "io_path_stat": false, 00:19:18.059 "allow_accel_sequence": false, 00:19:18.059 "rdma_max_cq_size": 0, 00:19:18.059 "rdma_cm_event_timeout_ms": 0, 00:19:18.059 "dhchap_digests": [ 00:19:18.059 "sha256", 00:19:18.059 "sha384", 00:19:18.059 "sha512" 00:19:18.059 ], 00:19:18.059 "dhchap_dhgroups": [ 00:19:18.059 "null", 00:19:18.059 "ffdhe2048", 00:19:18.059 "ffdhe3072", 00:19:18.059 "ffdhe4096", 00:19:18.059 "ffdhe6144", 00:19:18.059 "ffdhe8192" 00:19:18.059 ] 00:19:18.059 } 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "method": "bdev_nvme_attach_controller", 00:19:18.059 "params": { 00:19:18.059 "name": "nvme0", 00:19:18.059 "trtype": "TCP", 00:19:18.059 "adrfam": "IPv4", 00:19:18.059 "traddr": "10.0.0.2", 00:19:18.059 "trsvcid": "4420", 00:19:18.059 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.059 "prchk_reftag": false, 00:19:18.059 "prchk_guard": false, 00:19:18.059 "ctrlr_loss_timeout_sec": 0, 00:19:18.059 "reconnect_delay_sec": 0, 00:19:18.059 "fast_io_fail_timeout_sec": 0, 00:19:18.059 "psk": "key0", 00:19:18.059 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:18.059 "hdgst": false, 00:19:18.059 "ddgst": false 00:19:18.059 } 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "method": "bdev_nvme_set_hotplug", 00:19:18.059 "params": { 00:19:18.059 "period_us": 100000, 00:19:18.059 "enable": false 00:19:18.059 } 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "method": "bdev_enable_histogram", 00:19:18.059 "params": { 00:19:18.059 "name": "nvme0n1", 00:19:18.059 "enable": true 00:19:18.059 } 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "method": "bdev_wait_for_examine" 00:19:18.059 } 00:19:18.059 ] 00:19:18.059 }, 00:19:18.059 { 00:19:18.059 "subsystem": "nbd", 00:19:18.059 "config": [] 00:19:18.059 } 00:19:18.059 ] 00:19:18.059 }' 00:19:18.059 [2024-05-15 17:10:05.532002] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
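The initiator side mirrors the same pattern: bdevperf is started with -z so it sits idle until driven over RPC, its configuration (including bdev_nvme_attach_controller with "psk": "key0") arrives on /dev/fd/63, and the queued verify workload is only kicked off later by bdevperf.py. A condensed sketch of that sequence using the flags, paths and socket shown in the trace; BPERFCFG is assumed to hold the JSON printed above:

    # Sketch of the bdevperf flow used here: -z keeps the app waiting for RPCs,
    # the JSON config comes in on a process-substitution fd, and perform_tests
    # starts the -q 128 / -o 4k / -w verify / -t 1 run.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bdevperf.sock
    "$SPDK/build/examples/bdevperf" -m 2 -z -r "$BPERF_SOCK" \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$BPERFCFG") &
    # once the socket is up: confirm the attached controller, then run the workload
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests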
00:19:18.059 [2024-05-15 17:10:05.532047] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3101092 ] 00:19:18.059 EAL: No free 2048 kB hugepages reported on node 1 00:19:18.059 [2024-05-15 17:10:05.586602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.059 [2024-05-15 17:10:05.666951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.317 [2024-05-15 17:10:05.809512] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.882 17:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:18.882 17:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:18.882 17:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:18.882 17:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:19:18.882 17:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.882 17:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:19.140 Running I/O for 1 seconds... 00:19:20.074 00:19:20.074 Latency(us) 00:19:20.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.074 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:20.074 Verification LBA range: start 0x0 length 0x2000 00:19:20.074 nvme0n1 : 1.02 5435.35 21.23 0.00 0.00 23338.43 6610.59 33052.94 00:19:20.074 =================================================================================================================== 00:19:20.075 Total : 5435.35 21.23 0.00 0.00 23338.43 6610.59 33052.94 00:19:20.075 0 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:20.075 nvmf_trace.0 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3101092 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3101092 ']' 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3101092 
00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:20.075 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3101092 00:19:20.333 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:20.333 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:20.333 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3101092' 00:19:20.333 killing process with pid 3101092 00:19:20.333 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3101092 00:19:20.333 Received shutdown signal, test time was about 1.000000 seconds 00:19:20.333 00:19:20.333 Latency(us) 00:19:20.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.333 =================================================================================================================== 00:19:20.333 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.333 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3101092 00:19:20.333 17:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:20.333 17:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:20.333 17:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:20.333 17:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:20.333 17:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:20.333 17:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:20.333 17:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:20.333 rmmod nvme_tcp 00:19:20.333 rmmod nvme_fabrics 00:19:20.333 rmmod nvme_keyring 00:19:20.334 17:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:20.592 17:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:20.592 17:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:20.592 17:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3101052 ']' 00:19:20.592 17:10:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3101052 00:19:20.592 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3101052 ']' 00:19:20.592 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3101052 00:19:20.592 17:10:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:20.592 17:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:20.592 17:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3101052 00:19:20.592 17:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:20.592 17:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:20.592 17:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3101052' 00:19:20.592 killing process with pid 3101052 00:19:20.592 17:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3101052 00:19:20.592 [2024-05-15 17:10:08.044150] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:20.592 17:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 3101052 00:19:20.851 17:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:20.851 17:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:20.851 17:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:20.851 17:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:20.851 17:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:20.851 17:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.851 17:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:20.851 17:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.755 17:10:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:22.755 17:10:10 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Xm8ApRGOFe /tmp/tmp.GoybeJ8Iud /tmp/tmp.sG4BvOsBCe 00:19:22.755 00:19:22.755 real 1m24.756s 00:19:22.755 user 2m11.320s 00:19:22.755 sys 0m27.972s 00:19:22.755 17:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:22.755 17:10:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.755 ************************************ 00:19:22.755 END TEST nvmf_tls 00:19:22.755 ************************************ 00:19:22.755 17:10:10 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:22.755 17:10:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:22.755 17:10:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:22.755 17:10:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:22.755 ************************************ 00:19:22.755 START TEST nvmf_fips 00:19:22.755 ************************************ 00:19:22.755 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:23.014 * Looking for test storage... 
00:19:23.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.014 17:10:10 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:23.014 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:23.015 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:23.273 Error setting digest 00:19:23.273 00025678007F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:23.273 00025678007F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:23.273 17:10:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:28.539 
17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:28.539 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:28.539 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:28.539 Found net devices under 0000:86:00.0: cvl_0_0 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:28.539 Found net devices under 0000:86:00.1: cvl_0_1 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:28.539 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:28.540 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:28.540 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:28.540 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:28.540 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:28.540 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:28.540 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:19:28.540 00:19:28.540 --- 10.0.0.2 ping statistics --- 00:19:28.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.540 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:19:28.540 17:10:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:28.540 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:28.540 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:19:28.540 00:19:28.540 --- 10.0.0.1 ping statistics --- 00:19:28.540 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.540 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3105094 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3105094 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3105094 ']' 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:28.540 17:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:28.540 [2024-05-15 17:10:16.119121] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:19:28.540 [2024-05-15 17:10:16.119177] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.540 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.540 [2024-05-15 17:10:16.177060] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.798 [2024-05-15 17:10:16.254712] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.798 [2024-05-15 17:10:16.254746] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:28.798 [2024-05-15 17:10:16.254752] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.798 [2024-05-15 17:10:16.254758] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.798 [2024-05-15 17:10:16.254763] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.798 [2024-05-15 17:10:16.254796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.364 17:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:29.364 17:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:19:29.364 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:29.364 17:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.364 17:10:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:29.364 17:10:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.364 17:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:29.364 17:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:29.364 17:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:29.364 17:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:29.364 17:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:29.364 17:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:29.364 17:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:29.364 17:10:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:29.621 [2024-05-15 17:10:17.101957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.622 [2024-05-15 17:10:17.117934] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:29.622 [2024-05-15 17:10:17.117975] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.622 [2024-05-15 17:10:17.118149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.622 [2024-05-15 17:10:17.146343] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:29.622 malloc0 00:19:29.622 17:10:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.622 17:10:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3105345 00:19:29.622 17:10:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.622 17:10:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3105345 /var/tmp/bdevperf.sock 00:19:29.622 17:10:17 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@827 -- # '[' -z 3105345 ']' 00:19:29.622 17:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.622 17:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:29.622 17:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.622 17:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:29.622 17:10:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:29.622 [2024-05-15 17:10:17.226742] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:19:29.622 [2024-05-15 17:10:17.226789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3105345 ] 00:19:29.622 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.622 [2024-05-15 17:10:17.275711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.880 [2024-05-15 17:10:17.348146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.447 17:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:30.447 17:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:19:30.447 17:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:30.704 [2024-05-15 17:10:18.159058] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.704 [2024-05-15 17:10:18.159135] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:30.704 TLSTESTn1 00:19:30.704 17:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:30.704 Running I/O for 10 seconds... 
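
At this point the FIPS preconditions have been established: OPENSSL_CONF points at spdk_fips.conf, openssl list -providers shows both the base and the Red Hat FIPS providers, and the deliberate openssl md5 attempt failed as the test requires. The trace above then attaches an NVMe-oF/TCP controller through bdevperf's RPC socket using the TLS pre-shared key written earlier and starts the queue-depth-128, 4096-byte verify job for 10 seconds; its results follow below. Condensed for reference, the two commands driving this phase are (a minimal restatement of the calls traced above, using the same paths this job uses):

  # attach a TLS-protected NVMe/TCP controller via bdevperf's RPC socket
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
  # start the queued bdevperf workload (-q 128 -o 4096 -w verify -t 10, core mask 0x4)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests
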
00:19:40.734 00:19:40.734 Latency(us) 00:19:40.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.734 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:40.734 Verification LBA range: start 0x0 length 0x2000 00:19:40.734 TLSTESTn1 : 10.03 5085.47 19.87 0.00 0.00 25125.43 6610.59 57671.68 00:19:40.734 =================================================================================================================== 00:19:40.734 Total : 5085.47 19.87 0.00 0.00 25125.43 6610.59 57671.68 00:19:40.734 0 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:40.992 nvmf_trace.0 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3105345 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3105345 ']' 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3105345 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3105345 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3105345' 00:19:40.992 killing process with pid 3105345 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3105345 00:19:40.992 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.992 00:19:40.992 Latency(us) 00:19:40.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.992 =================================================================================================================== 00:19:40.992 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.992 [2024-05-15 17:10:28.521293] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:40.992 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3105345 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:41.250 rmmod nvme_tcp 00:19:41.250 rmmod nvme_fabrics 00:19:41.250 rmmod nvme_keyring 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3105094 ']' 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3105094 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3105094 ']' 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3105094 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3105094 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3105094' 00:19:41.250 killing process with pid 3105094 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3105094 00:19:41.250 [2024-05-15 17:10:28.838665] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:41.250 [2024-05-15 17:10:28.838700] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:41.250 17:10:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3105094 00:19:41.508 17:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:41.508 17:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:41.508 17:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:41.508 17:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.508 17:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:41.508 17:10:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.508 17:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.508 17:10:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.039 17:10:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:44.039 17:10:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:44.039 00:19:44.039 real 0m20.719s 00:19:44.039 user 0m22.812s 00:19:44.039 sys 0m8.716s 00:19:44.039 17:10:31 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:44.039 17:10:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:44.039 ************************************ 00:19:44.039 END TEST nvmf_fips 00:19:44.039 ************************************ 00:19:44.039 17:10:31 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:44.039 17:10:31 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:44.039 17:10:31 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:44.039 17:10:31 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:44.039 17:10:31 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:44.039 17:10:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.304 17:10:35 
nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:49.304 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:49.304 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:49.304 Found net devices under 0000:86:00.0: cvl_0_0 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:49.304 Found net devices under 0000:86:00.1: cvl_0_1 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:19:49.304 17:10:35 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
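
Having re-scanned the two E810 ports and found usable net devices on this phy target, nvmf.sh hands off to the next suite: ADQ (Intel's Application Device Queues) steers a connection's traffic to dedicated E810 hardware queues, so the test needs the physical ice-driven ports found above rather than a virtual topology. The gate and hand-off condense to roughly the following (a minimal sketch; the variable names are illustrative, the run_test invocation is as traced above):

  # dispatch the ADQ performance suite only when supported physical NICs were found
  [[ $NET_TYPE == phy ]] && (( ${#TCP_INTERFACE_LIST[@]} > 0 )) && \
      run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
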
00:19:49.304 17:10:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:49.304 17:10:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:49.304 17:10:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:49.304 ************************************ 00:19:49.304 START TEST nvmf_perf_adq 00:19:49.304 ************************************ 00:19:49.304 17:10:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:49.304 * Looking for test storage... 00:19:49.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:49.304 17:10:36 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:49.305 17:10:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:53.495 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
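
gather_supported_nvmf_pci_devs filters the PCI bus cache against the Intel (0x8086) E810/X722 and Mellanox (0x15b3) device IDs whitelisted above, then resolves each surviving function to its kernel netdev through sysfs; on this rig both E810 ports (device 0x159b) resolve to cvl_0_0 and cvl_0_1, as the scan continuing below shows. A minimal sketch of that per-port lookup, assuming the standard sysfs layout and with the two functions hard-coded for illustration:

  # resolve each matched PCI function to its netdev name via sysfs
  for pci in 0000:86:00.0 0000:86:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done
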
00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:53.495 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:53.495 Found net devices under 0000:86:00.0: cvl_0_0 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:53.495 Found net devices under 0000:86:00.1: cvl_0_1 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:53.495 17:10:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:54.431 17:10:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:56.332 17:10:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:01.606 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:01.606 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:01.606 Found net devices under 0000:86:00.0: cvl_0_0 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:01.606 Found net devices under 0000:86:00.1: cvl_0_1 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:01.606 17:10:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:01.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:20:01.606 00:20:01.606 --- 10.0.0.2 ping statistics --- 00:20:01.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.606 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:01.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:20:01.606 00:20:01.606 --- 10.0.0.1 ping statistics --- 00:20:01.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.606 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:01.606 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:01.607 17:10:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:01.607 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:01.607 17:10:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:01.607 17:10:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.607 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3114827 00:20:01.607 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3114827 00:20:01.607 17:10:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3114827 ']' 00:20:01.607 17:10:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.607 17:10:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:01.607 17:10:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
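
With one E810 port moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and its sibling left in the root namespace as the initiator side (10.0.0.1), the perf_adq suite starts nvmf_tgt inside the namespace with --wait-for-rpc so the posix sock implementation can be tuned before the framework initializes. The configuration traced below boils down to the following sequence (a condensed sketch; rpc.py stands for the repo's scripts/rpc.py talking to the target's default /var/tmp/spdk.sock inside the namespace):

  # tune the posix sock implementation before framework init, then bring up the TCP target
  rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
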
00:20:01.607 17:10:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:01.607 17:10:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.607 17:10:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:01.607 [2024-05-15 17:10:49.225529] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:20:01.607 [2024-05-15 17:10:49.225573] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.607 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.865 [2024-05-15 17:10:49.282677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:01.865 [2024-05-15 17:10:49.369028] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.865 [2024-05-15 17:10:49.369063] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.865 [2024-05-15 17:10:49.369070] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:01.865 [2024-05-15 17:10:49.369076] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:01.865 [2024-05-15 17:10:49.369081] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.865 [2024-05-15 17:10:49.369123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.865 [2024-05-15 17:10:49.369140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.865 [2024-05-15 17:10:49.369396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.865 [2024-05-15 17:10:49.369399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.429 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:02.429 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:20:02.429 17:10:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:02.429 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:02.429 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.429 17:10:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.429 17:10:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:02.429 17:10:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:02.430 17:10:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:02.430 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.430 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.430 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.688 17:10:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:02.688 17:10:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:02.688 17:10:50 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.688 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.688 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.688 17:10:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:02.688 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.688 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.688 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.688 17:10:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:02.688 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.688 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.688 [2024-05-15 17:10:50.213328] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.688 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.688 17:10:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.689 Malloc1 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:02.689 [2024-05-15 17:10:50.264967] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:02.689 [2024-05-15 17:10:50.265205] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3115064 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:02.689 17:10:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:02.689 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.213 17:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:05.213 17:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.213 17:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.213 17:10:52 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.213 17:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:05.213 "tick_rate": 2300000000, 00:20:05.213 "poll_groups": [ 00:20:05.213 { 00:20:05.213 "name": "nvmf_tgt_poll_group_000", 00:20:05.213 "admin_qpairs": 1, 00:20:05.213 "io_qpairs": 1, 00:20:05.213 "current_admin_qpairs": 1, 00:20:05.213 "current_io_qpairs": 1, 00:20:05.213 "pending_bdev_io": 0, 00:20:05.213 "completed_nvme_io": 19537, 00:20:05.213 "transports": [ 00:20:05.213 { 00:20:05.213 "trtype": "TCP" 00:20:05.213 } 00:20:05.213 ] 00:20:05.213 }, 00:20:05.213 { 00:20:05.213 "name": "nvmf_tgt_poll_group_001", 00:20:05.213 "admin_qpairs": 0, 00:20:05.213 "io_qpairs": 1, 00:20:05.213 "current_admin_qpairs": 0, 00:20:05.213 "current_io_qpairs": 1, 00:20:05.213 "pending_bdev_io": 0, 00:20:05.213 "completed_nvme_io": 19640, 00:20:05.213 "transports": [ 00:20:05.213 { 00:20:05.213 "trtype": "TCP" 00:20:05.213 } 00:20:05.213 ] 00:20:05.213 }, 00:20:05.213 { 00:20:05.213 "name": "nvmf_tgt_poll_group_002", 00:20:05.213 "admin_qpairs": 0, 00:20:05.213 "io_qpairs": 1, 00:20:05.213 "current_admin_qpairs": 0, 00:20:05.213 "current_io_qpairs": 1, 00:20:05.213 "pending_bdev_io": 0, 00:20:05.213 "completed_nvme_io": 19573, 00:20:05.213 "transports": [ 00:20:05.213 { 00:20:05.213 "trtype": "TCP" 00:20:05.213 } 00:20:05.213 ] 00:20:05.213 }, 00:20:05.213 { 00:20:05.213 "name": "nvmf_tgt_poll_group_003", 00:20:05.213 "admin_qpairs": 0, 00:20:05.213 "io_qpairs": 1, 00:20:05.213 "current_admin_qpairs": 0, 00:20:05.213 "current_io_qpairs": 1, 00:20:05.213 "pending_bdev_io": 0, 00:20:05.213 "completed_nvme_io": 19238, 00:20:05.213 "transports": [ 00:20:05.213 { 00:20:05.213 "trtype": "TCP" 00:20:05.213 } 00:20:05.213 ] 00:20:05.213 } 00:20:05.213 ] 00:20:05.213 }' 00:20:05.213 17:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:05.213 17:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:05.213 17:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:05.213 17:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:05.213 17:10:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3115064 00:20:13.355 Initializing NVMe Controllers 00:20:13.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:13.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:13.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:13.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:13.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:13.355 Initialization complete. Launching workers. 
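This nvmf_get_stats / jq / wc pipeline is the actual assertion of the baseline pass: with ADQ not yet configured, the four perf connections should fan out across all four reactors, so every poll group must report exactly one active I/O queue pair. A sketch of the same check, plus the inverted check used after ADQ is enabled later in this log, assuming scripts/rpc.py is pointed at the target's /var/tmp/spdk.sock (which is what the rpc_cmd helper wraps):

  stats=$(scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats)

  # baseline pass: all 4 poll groups should be carrying I/O
  busy=$(echo "$stats" | jq '[.poll_groups[] | select(.current_io_qpairs == 1)] | length')
  [[ "$busy" -eq 4 ]] || echo "ADQ-off run: expected 4 busy poll groups, got $busy"

  # ADQ pass (second half of this log): traffic is pinned to the dedicated queues,
  # so at least two poll groups should be completely idle
  idle=$(echo "$stats" | jq '[.poll_groups[] | select(.current_io_qpairs == 0)] | length')
  [[ "$idle" -ge 2 ]] || echo "ADQ-on run: expected >= 2 idle poll groups, got $idle"

In the stats above, the per-group completed I/O counts (19537 / 19640 / 19573 / 19238) show the even spread expected for the ADQ-off run.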
00:20:13.355 ======================================================== 00:20:13.355 Latency(us) 00:20:13.355 Device Information : IOPS MiB/s Average min max 00:20:13.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10313.90 40.29 6204.66 2123.03 10328.31 00:20:13.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10401.80 40.63 6154.13 1684.19 11468.22 00:20:13.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10158.90 39.68 6302.05 2594.75 10825.03 00:20:13.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10291.10 40.20 6219.33 2563.55 10894.64 00:20:13.355 ======================================================== 00:20:13.355 Total : 41165.68 160.80 6219.59 1684.19 11468.22 00:20:13.355 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:13.355 rmmod nvme_tcp 00:20:13.355 rmmod nvme_fabrics 00:20:13.355 rmmod nvme_keyring 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3114827 ']' 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3114827 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3114827 ']' 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3114827 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3114827 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3114827' 00:20:13.355 killing process with pid 3114827 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3114827 00:20:13.355 [2024-05-15 17:11:00.526932] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3114827 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:13.355 17:11:00 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.355 17:11:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.290 17:11:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:15.290 17:11:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:15.290 17:11:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:16.669 17:11:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:18.575 17:11:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:23.859 
17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:23.859 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:23.860 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:23.860 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:23.860 Found net devices under 0000:86:00.0: cvl_0_0 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:23.860 Found net devices under 0000:86:00.1: cvl_0_1 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:23.860 17:11:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:23.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:20:23.860 00:20:23.860 --- 10.0.0.2 ping statistics --- 00:20:23.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.860 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:23.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:23.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:20:23.860 00:20:23.860 --- 10.0.0.1 ping statistics --- 00:20:23.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.860 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:23.860 net.core.busy_poll = 1 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:23.860 net.core.busy_read = 1 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3118852 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3118852 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3118852 ']' 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:23.860 17:11:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.860 [2024-05-15 17:11:11.498150] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:20:23.860 [2024-05-15 17:11:11.498205] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.120 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.120 [2024-05-15 17:11:11.556145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.120 [2024-05-15 17:11:11.630152] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.120 [2024-05-15 17:11:11.630194] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.120 [2024-05-15 17:11:11.630202] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.120 [2024-05-15 17:11:11.630208] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.120 [2024-05-15 17:11:11.630213] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
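What makes this second pass an ADQ run is the device-side setup just applied inside the target namespace: hardware TC offload on the E810 port, busy polling, an mqprio root qdisc that carves out a dedicated traffic class, and a hardware flower filter that steers NVMe/TCP traffic for 10.0.0.2:4420 into that class. Condensed into a sketch (queue layout and names are the ones from this run; in_ns is just a local helper, not part of the test scripts):

  in_ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }       # run inside the target namespace

  in_ns ethtool --offload cvl_0_0 hw-tc-offload on
  in_ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1

  # TC0: 2 queues at offset 0 (default traffic); TC1: 2 queues at offset 2 (ADQ channel)
  in_ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  in_ns tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP (dst 10.0.0.2:4420) into TC1, in hardware only (skip_sw)
  in_ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The SPDK-side half of the pairing shows up just below: the posix sock implementation is configured with --enable-placement-id 1 and the TCP transport with --sock-priority 1, which is what lets the target group accepted sockets by the hardware queue they arrive on.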
00:20:24.120 [2024-05-15 17:11:11.630259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.120 [2024-05-15 17:11:11.630355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.120 [2024-05-15 17:11:11.630440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.120 [2024-05-15 17:11:11.630442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.685 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:24.685 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:20:24.685 17:11:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:24.685 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:24.685 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.685 17:11:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.685 17:11:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:24.685 17:11:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:24.685 17:11:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:24.685 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.685 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.943 [2024-05-15 17:11:12.485949] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.943 Malloc1 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.943 17:11:12 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:24.943 [2024-05-15 17:11:12.533388] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:24.943 [2024-05-15 17:11:12.533640] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3119104 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:24.943 17:11:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:24.943 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.466 17:11:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:27.466 17:11:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.466 17:11:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:27.466 17:11:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.466 17:11:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:27.466 "tick_rate": 2300000000, 00:20:27.466 "poll_groups": [ 00:20:27.466 { 00:20:27.466 "name": "nvmf_tgt_poll_group_000", 00:20:27.466 "admin_qpairs": 1, 00:20:27.466 "io_qpairs": 3, 00:20:27.466 "current_admin_qpairs": 1, 00:20:27.466 "current_io_qpairs": 3, 00:20:27.466 "pending_bdev_io": 0, 00:20:27.466 "completed_nvme_io": 29629, 00:20:27.466 "transports": [ 00:20:27.466 { 00:20:27.466 "trtype": "TCP" 00:20:27.466 } 00:20:27.466 ] 00:20:27.466 }, 00:20:27.466 { 00:20:27.466 "name": "nvmf_tgt_poll_group_001", 00:20:27.466 "admin_qpairs": 0, 00:20:27.466 "io_qpairs": 1, 00:20:27.466 "current_admin_qpairs": 0, 00:20:27.466 "current_io_qpairs": 1, 00:20:27.466 "pending_bdev_io": 0, 00:20:27.466 "completed_nvme_io": 27638, 00:20:27.466 "transports": [ 00:20:27.466 { 00:20:27.466 "trtype": "TCP" 00:20:27.466 } 00:20:27.466 ] 00:20:27.466 }, 00:20:27.466 { 00:20:27.466 "name": 
"nvmf_tgt_poll_group_002", 00:20:27.466 "admin_qpairs": 0, 00:20:27.466 "io_qpairs": 0, 00:20:27.466 "current_admin_qpairs": 0, 00:20:27.466 "current_io_qpairs": 0, 00:20:27.466 "pending_bdev_io": 0, 00:20:27.466 "completed_nvme_io": 0, 00:20:27.466 "transports": [ 00:20:27.466 { 00:20:27.466 "trtype": "TCP" 00:20:27.466 } 00:20:27.466 ] 00:20:27.466 }, 00:20:27.466 { 00:20:27.466 "name": "nvmf_tgt_poll_group_003", 00:20:27.466 "admin_qpairs": 0, 00:20:27.466 "io_qpairs": 0, 00:20:27.466 "current_admin_qpairs": 0, 00:20:27.466 "current_io_qpairs": 0, 00:20:27.466 "pending_bdev_io": 0, 00:20:27.466 "completed_nvme_io": 0, 00:20:27.466 "transports": [ 00:20:27.466 { 00:20:27.466 "trtype": "TCP" 00:20:27.466 } 00:20:27.466 ] 00:20:27.466 } 00:20:27.466 ] 00:20:27.466 }' 00:20:27.466 17:11:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:27.466 17:11:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:27.467 17:11:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:27.467 17:11:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:27.467 17:11:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3119104 00:20:35.575 Initializing NVMe Controllers 00:20:35.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:35.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:35.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:35.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:35.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:35.575 Initialization complete. Launching workers. 
00:20:35.575 ======================================================== 00:20:35.575 Latency(us) 00:20:35.575 Device Information : IOPS MiB/s Average min max 00:20:35.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5427.45 21.20 11796.83 1458.30 60941.75 00:20:35.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14426.47 56.35 4435.73 1219.60 7241.40 00:20:35.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5095.15 19.90 12562.78 1701.70 60166.45 00:20:35.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4681.36 18.29 13675.78 1935.09 59668.00 00:20:35.575 ======================================================== 00:20:35.575 Total : 29630.44 115.74 8641.43 1219.60 60941.75 00:20:35.575 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:35.575 rmmod nvme_tcp 00:20:35.575 rmmod nvme_fabrics 00:20:35.575 rmmod nvme_keyring 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3118852 ']' 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3118852 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3118852 ']' 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3118852 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3118852 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3118852' 00:20:35.575 killing process with pid 3118852 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3118852 00:20:35.575 [2024-05-15 17:11:22.757969] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3118852 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:35.575 17:11:22 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:35.575 17:11:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.856 17:11:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:38.856 17:11:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:38.856 00:20:38.856 real 0m50.076s 00:20:38.856 user 2m48.972s 00:20:38.856 sys 0m8.980s 00:20:38.856 17:11:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:38.856 17:11:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.856 ************************************ 00:20:38.856 END TEST nvmf_perf_adq 00:20:38.856 ************************************ 00:20:38.856 17:11:26 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:38.856 17:11:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:38.856 17:11:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:38.856 17:11:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:38.856 ************************************ 00:20:38.856 START TEST nvmf_shutdown 00:20:38.856 ************************************ 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:38.856 * Looking for test storage... 
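Before the shutdown suite starts, nvmftestfini has already unwound the ADQ run: the initiator-side NVMe modules are removed, the nvmf_tgt process is killed, and the namespace plumbing is torn down. Roughly, as a sketch in the order traced above (the body of remove_spdk_ns is suppressed by xtrace here, so the explicit netns delete is an assumption about what it does):

  modprobe -v -r nvme-tcp                       # drops nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"            # stop the nvmf_tgt started for this pass
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1

Between the two perf passes the ice driver was also reloaded (rmmod ice; modprobe ice; sleep 5) so the second pass starts from a clean channel configuration.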
00:20:38.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.856 17:11:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:38.857 ************************************ 00:20:38.857 START TEST nvmf_shutdown_tc1 00:20:38.857 ************************************ 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:20:38.857 17:11:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:38.857 17:11:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:44.120 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:44.120 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:44.120 17:11:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:44.120 Found net devices under 0000:86:00.0: cvl_0_0 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:44.120 Found net devices under 0000:86:00.1: cvl_0_1 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:44.120 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:44.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:20:44.121 00:20:44.121 --- 10.0.0.2 ping statistics --- 00:20:44.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.121 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:20:44.121 00:20:44.121 --- 10.0.0.1 ping statistics --- 00:20:44.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.121 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3124481 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3124481 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3124481 ']' 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:44.121 17:11:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:44.121 [2024-05-15 17:11:31.731657] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:20:44.121 [2024-05-15 17:11:31.731702] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.121 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.379 [2024-05-15 17:11:31.790420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.379 [2024-05-15 17:11:31.862708] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.380 [2024-05-15 17:11:31.862749] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.380 [2024-05-15 17:11:31.862756] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.380 [2024-05-15 17:11:31.862763] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.380 [2024-05-15 17:11:31.862767] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.380 [2024-05-15 17:11:31.862870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.380 [2024-05-15 17:11:31.862977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.380 [2024-05-15 17:11:31.863063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.380 [2024-05-15 17:11:31.863064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:44.945 [2024-05-15 17:11:32.576159] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:44.945 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.204 17:11:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:45.204 Malloc1 00:20:45.204 [2024-05-15 17:11:32.671964] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:45.204 [2024-05-15 17:11:32.672209] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.204 Malloc2 00:20:45.204 Malloc3 00:20:45.204 Malloc4 00:20:45.204 Malloc5 00:20:45.204 Malloc6 00:20:45.466 Malloc7 00:20:45.466 Malloc8 00:20:45.466 Malloc9 00:20:45.466 Malloc10 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:45.467 17:11:33 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3124769 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3124769 /var/tmp/bdevperf.sock 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3124769 ']' 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:45.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.467 { 00:20:45.467 "params": { 00:20:45.467 "name": "Nvme$subsystem", 00:20:45.467 "trtype": "$TEST_TRANSPORT", 00:20:45.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.467 "adrfam": "ipv4", 00:20:45.467 "trsvcid": "$NVMF_PORT", 00:20:45.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.467 "hdgst": ${hdgst:-false}, 00:20:45.467 "ddgst": ${ddgst:-false} 00:20:45.467 }, 00:20:45.467 "method": "bdev_nvme_attach_controller" 00:20:45.467 } 00:20:45.467 EOF 00:20:45.467 )") 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.467 { 00:20:45.467 "params": { 00:20:45.467 "name": "Nvme$subsystem", 00:20:45.467 "trtype": "$TEST_TRANSPORT", 00:20:45.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.467 "adrfam": "ipv4", 00:20:45.467 "trsvcid": "$NVMF_PORT", 00:20:45.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.467 "hdgst": ${hdgst:-false}, 00:20:45.467 "ddgst": ${ddgst:-false} 00:20:45.467 }, 00:20:45.467 "method": "bdev_nvme_attach_controller" 00:20:45.467 } 00:20:45.467 EOF 00:20:45.467 )") 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.467 { 00:20:45.467 "params": { 00:20:45.467 "name": "Nvme$subsystem", 00:20:45.467 "trtype": "$TEST_TRANSPORT", 00:20:45.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.467 "adrfam": "ipv4", 00:20:45.467 "trsvcid": "$NVMF_PORT", 00:20:45.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.467 "hdgst": ${hdgst:-false}, 00:20:45.467 "ddgst": ${ddgst:-false} 00:20:45.467 }, 00:20:45.467 "method": "bdev_nvme_attach_controller" 00:20:45.467 } 00:20:45.467 EOF 00:20:45.467 )") 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.467 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.467 { 00:20:45.467 "params": { 00:20:45.467 "name": "Nvme$subsystem", 00:20:45.467 "trtype": "$TEST_TRANSPORT", 00:20:45.467 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.467 "adrfam": "ipv4", 00:20:45.467 "trsvcid": "$NVMF_PORT", 00:20:45.467 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.467 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.467 "hdgst": ${hdgst:-false}, 00:20:45.467 "ddgst": ${ddgst:-false} 00:20:45.467 }, 00:20:45.467 "method": "bdev_nvme_attach_controller" 00:20:45.467 } 00:20:45.467 EOF 00:20:45.467 )") 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.725 { 00:20:45.725 "params": { 00:20:45.725 "name": "Nvme$subsystem", 00:20:45.725 "trtype": "$TEST_TRANSPORT", 00:20:45.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.725 "adrfam": "ipv4", 00:20:45.725 "trsvcid": "$NVMF_PORT", 00:20:45.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.725 "hdgst": ${hdgst:-false}, 00:20:45.725 "ddgst": ${ddgst:-false} 00:20:45.725 }, 00:20:45.725 "method": "bdev_nvme_attach_controller" 00:20:45.725 } 00:20:45.725 EOF 00:20:45.725 )") 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.725 { 00:20:45.725 "params": { 00:20:45.725 "name": "Nvme$subsystem", 00:20:45.725 "trtype": "$TEST_TRANSPORT", 00:20:45.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.725 "adrfam": "ipv4", 00:20:45.725 "trsvcid": "$NVMF_PORT", 00:20:45.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.725 "hdgst": ${hdgst:-false}, 00:20:45.725 "ddgst": ${ddgst:-false} 00:20:45.725 }, 00:20:45.725 "method": "bdev_nvme_attach_controller" 00:20:45.725 } 00:20:45.725 EOF 00:20:45.725 )") 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.725 { 00:20:45.725 "params": { 00:20:45.725 "name": "Nvme$subsystem", 00:20:45.725 "trtype": "$TEST_TRANSPORT", 00:20:45.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.725 "adrfam": "ipv4", 00:20:45.725 "trsvcid": "$NVMF_PORT", 00:20:45.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.725 "hdgst": ${hdgst:-false}, 00:20:45.725 "ddgst": ${ddgst:-false} 00:20:45.725 }, 00:20:45.725 "method": "bdev_nvme_attach_controller" 00:20:45.725 } 00:20:45.725 EOF 00:20:45.725 )") 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.725 [2024-05-15 17:11:33.146539] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:20:45.725 [2024-05-15 17:11:33.146591] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.725 { 00:20:45.725 "params": { 00:20:45.725 "name": "Nvme$subsystem", 00:20:45.725 "trtype": "$TEST_TRANSPORT", 00:20:45.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.725 "adrfam": "ipv4", 00:20:45.725 "trsvcid": "$NVMF_PORT", 00:20:45.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.725 "hdgst": ${hdgst:-false}, 00:20:45.725 "ddgst": ${ddgst:-false} 00:20:45.725 }, 00:20:45.725 "method": "bdev_nvme_attach_controller" 00:20:45.725 } 00:20:45.725 EOF 00:20:45.725 )") 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.725 { 00:20:45.725 "params": { 00:20:45.725 "name": "Nvme$subsystem", 00:20:45.725 "trtype": "$TEST_TRANSPORT", 00:20:45.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.725 "adrfam": "ipv4", 00:20:45.725 "trsvcid": "$NVMF_PORT", 00:20:45.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.725 "hdgst": ${hdgst:-false}, 00:20:45.725 "ddgst": ${ddgst:-false} 00:20:45.725 }, 00:20:45.725 "method": "bdev_nvme_attach_controller" 00:20:45.725 } 00:20:45.725 EOF 00:20:45.725 )") 00:20:45.725 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.726 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:45.726 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:45.726 { 00:20:45.726 "params": { 00:20:45.726 "name": "Nvme$subsystem", 00:20:45.726 "trtype": "$TEST_TRANSPORT", 00:20:45.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:45.726 "adrfam": "ipv4", 00:20:45.726 "trsvcid": "$NVMF_PORT", 00:20:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:45.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:45.726 "hdgst": ${hdgst:-false}, 00:20:45.726 
"ddgst": ${ddgst:-false} 00:20:45.726 }, 00:20:45.726 "method": "bdev_nvme_attach_controller" 00:20:45.726 } 00:20:45.726 EOF 00:20:45.726 )") 00:20:45.726 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:45.726 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:20:45.726 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.726 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:45.726 17:11:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:45.726 "params": { 00:20:45.726 "name": "Nvme1", 00:20:45.726 "trtype": "tcp", 00:20:45.726 "traddr": "10.0.0.2", 00:20:45.726 "adrfam": "ipv4", 00:20:45.726 "trsvcid": "4420", 00:20:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.726 "hdgst": false, 00:20:45.726 "ddgst": false 00:20:45.726 }, 00:20:45.726 "method": "bdev_nvme_attach_controller" 00:20:45.726 },{ 00:20:45.726 "params": { 00:20:45.726 "name": "Nvme2", 00:20:45.726 "trtype": "tcp", 00:20:45.726 "traddr": "10.0.0.2", 00:20:45.726 "adrfam": "ipv4", 00:20:45.726 "trsvcid": "4420", 00:20:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:45.726 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:45.726 "hdgst": false, 00:20:45.726 "ddgst": false 00:20:45.726 }, 00:20:45.726 "method": "bdev_nvme_attach_controller" 00:20:45.726 },{ 00:20:45.726 "params": { 00:20:45.726 "name": "Nvme3", 00:20:45.726 "trtype": "tcp", 00:20:45.726 "traddr": "10.0.0.2", 00:20:45.726 "adrfam": "ipv4", 00:20:45.726 "trsvcid": "4420", 00:20:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:45.726 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:45.726 "hdgst": false, 00:20:45.726 "ddgst": false 00:20:45.726 }, 00:20:45.726 "method": "bdev_nvme_attach_controller" 00:20:45.726 },{ 00:20:45.726 "params": { 00:20:45.726 "name": "Nvme4", 00:20:45.726 "trtype": "tcp", 00:20:45.726 "traddr": "10.0.0.2", 00:20:45.726 "adrfam": "ipv4", 00:20:45.726 "trsvcid": "4420", 00:20:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:45.726 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:45.726 "hdgst": false, 00:20:45.726 "ddgst": false 00:20:45.726 }, 00:20:45.726 "method": "bdev_nvme_attach_controller" 00:20:45.726 },{ 00:20:45.726 "params": { 00:20:45.726 "name": "Nvme5", 00:20:45.726 "trtype": "tcp", 00:20:45.726 "traddr": "10.0.0.2", 00:20:45.726 "adrfam": "ipv4", 00:20:45.726 "trsvcid": "4420", 00:20:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:45.726 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:45.726 "hdgst": false, 00:20:45.726 "ddgst": false 00:20:45.726 }, 00:20:45.726 "method": "bdev_nvme_attach_controller" 00:20:45.726 },{ 00:20:45.726 "params": { 00:20:45.726 "name": "Nvme6", 00:20:45.726 "trtype": "tcp", 00:20:45.726 "traddr": "10.0.0.2", 00:20:45.726 "adrfam": "ipv4", 00:20:45.726 "trsvcid": "4420", 00:20:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:45.726 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:45.726 "hdgst": false, 00:20:45.726 "ddgst": false 00:20:45.726 }, 00:20:45.726 "method": "bdev_nvme_attach_controller" 00:20:45.726 },{ 00:20:45.726 "params": { 00:20:45.726 "name": "Nvme7", 00:20:45.726 "trtype": "tcp", 00:20:45.726 "traddr": "10.0.0.2", 00:20:45.726 "adrfam": "ipv4", 00:20:45.726 "trsvcid": "4420", 00:20:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:45.726 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:45.726 "hdgst": false, 00:20:45.726 "ddgst": false 00:20:45.726 }, 
00:20:45.726 "method": "bdev_nvme_attach_controller" 00:20:45.726 },{ 00:20:45.726 "params": { 00:20:45.726 "name": "Nvme8", 00:20:45.726 "trtype": "tcp", 00:20:45.726 "traddr": "10.0.0.2", 00:20:45.726 "adrfam": "ipv4", 00:20:45.726 "trsvcid": "4420", 00:20:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:45.726 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:45.726 "hdgst": false, 00:20:45.726 "ddgst": false 00:20:45.726 }, 00:20:45.726 "method": "bdev_nvme_attach_controller" 00:20:45.726 },{ 00:20:45.726 "params": { 00:20:45.726 "name": "Nvme9", 00:20:45.726 "trtype": "tcp", 00:20:45.726 "traddr": "10.0.0.2", 00:20:45.726 "adrfam": "ipv4", 00:20:45.726 "trsvcid": "4420", 00:20:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:45.726 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:45.726 "hdgst": false, 00:20:45.726 "ddgst": false 00:20:45.726 }, 00:20:45.726 "method": "bdev_nvme_attach_controller" 00:20:45.726 },{ 00:20:45.726 "params": { 00:20:45.726 "name": "Nvme10", 00:20:45.726 "trtype": "tcp", 00:20:45.726 "traddr": "10.0.0.2", 00:20:45.726 "adrfam": "ipv4", 00:20:45.726 "trsvcid": "4420", 00:20:45.726 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:45.726 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:45.726 "hdgst": false, 00:20:45.726 "ddgst": false 00:20:45.726 }, 00:20:45.726 "method": "bdev_nvme_attach_controller" 00:20:45.726 }' 00:20:45.726 [2024-05-15 17:11:33.204186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.726 [2024-05-15 17:11:33.276621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.103 17:11:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:47.103 17:11:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:20:47.103 17:11:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:47.103 17:11:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.103 17:11:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:47.103 17:11:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.103 17:11:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3124769 00:20:47.103 17:11:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:47.103 17:11:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:48.121 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3124769 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3124481 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:48.122 17:11:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.122 { 00:20:48.122 "params": { 00:20:48.122 "name": "Nvme$subsystem", 00:20:48.122 "trtype": "$TEST_TRANSPORT", 00:20:48.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.122 "adrfam": "ipv4", 00:20:48.122 "trsvcid": "$NVMF_PORT", 00:20:48.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.122 "hdgst": ${hdgst:-false}, 00:20:48.122 "ddgst": ${ddgst:-false} 00:20:48.122 }, 00:20:48.122 "method": "bdev_nvme_attach_controller" 00:20:48.122 } 00:20:48.122 EOF 00:20:48.122 )") 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.122 { 00:20:48.122 "params": { 00:20:48.122 "name": "Nvme$subsystem", 00:20:48.122 "trtype": "$TEST_TRANSPORT", 00:20:48.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.122 "adrfam": "ipv4", 00:20:48.122 "trsvcid": "$NVMF_PORT", 00:20:48.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.122 "hdgst": ${hdgst:-false}, 00:20:48.122 "ddgst": ${ddgst:-false} 00:20:48.122 }, 00:20:48.122 "method": "bdev_nvme_attach_controller" 00:20:48.122 } 00:20:48.122 EOF 00:20:48.122 )") 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.122 { 00:20:48.122 "params": { 00:20:48.122 "name": "Nvme$subsystem", 00:20:48.122 "trtype": "$TEST_TRANSPORT", 00:20:48.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.122 "adrfam": "ipv4", 00:20:48.122 "trsvcid": "$NVMF_PORT", 00:20:48.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.122 "hdgst": ${hdgst:-false}, 00:20:48.122 "ddgst": ${ddgst:-false} 00:20:48.122 }, 00:20:48.122 "method": "bdev_nvme_attach_controller" 00:20:48.122 } 00:20:48.122 EOF 00:20:48.122 )") 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.122 { 00:20:48.122 "params": { 00:20:48.122 "name": "Nvme$subsystem", 00:20:48.122 "trtype": "$TEST_TRANSPORT", 00:20:48.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.122 "adrfam": "ipv4", 00:20:48.122 "trsvcid": "$NVMF_PORT", 00:20:48.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.122 "hdgst": ${hdgst:-false}, 00:20:48.122 "ddgst": ${ddgst:-false} 00:20:48.122 }, 00:20:48.122 "method": "bdev_nvme_attach_controller" 00:20:48.122 } 00:20:48.122 EOF 00:20:48.122 )") 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:48.122 17:11:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.122 { 00:20:48.122 "params": { 00:20:48.122 "name": "Nvme$subsystem", 00:20:48.122 "trtype": "$TEST_TRANSPORT", 00:20:48.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.122 "adrfam": "ipv4", 00:20:48.122 "trsvcid": "$NVMF_PORT", 00:20:48.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.122 "hdgst": ${hdgst:-false}, 00:20:48.122 "ddgst": ${ddgst:-false} 00:20:48.122 }, 00:20:48.122 "method": "bdev_nvme_attach_controller" 00:20:48.122 } 00:20:48.122 EOF 00:20:48.122 )") 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.122 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.122 { 00:20:48.122 "params": { 00:20:48.122 "name": "Nvme$subsystem", 00:20:48.122 "trtype": "$TEST_TRANSPORT", 00:20:48.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.122 "adrfam": "ipv4", 00:20:48.122 "trsvcid": "$NVMF_PORT", 00:20:48.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.123 "hdgst": ${hdgst:-false}, 00:20:48.123 "ddgst": ${ddgst:-false} 00:20:48.123 }, 00:20:48.123 "method": "bdev_nvme_attach_controller" 00:20:48.123 } 00:20:48.123 EOF 00:20:48.123 )") 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.123 { 00:20:48.123 "params": { 00:20:48.123 "name": "Nvme$subsystem", 00:20:48.123 "trtype": "$TEST_TRANSPORT", 00:20:48.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.123 "adrfam": "ipv4", 00:20:48.123 "trsvcid": "$NVMF_PORT", 00:20:48.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.123 "hdgst": ${hdgst:-false}, 00:20:48.123 "ddgst": ${ddgst:-false} 00:20:48.123 }, 00:20:48.123 "method": "bdev_nvme_attach_controller" 00:20:48.123 } 00:20:48.123 EOF 00:20:48.123 )") 00:20:48.123 [2024-05-15 17:11:35.724170] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:20:48.123 [2024-05-15 17:11:35.724225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3125121 ] 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.123 { 00:20:48.123 "params": { 00:20:48.123 "name": "Nvme$subsystem", 00:20:48.123 "trtype": "$TEST_TRANSPORT", 00:20:48.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.123 "adrfam": "ipv4", 00:20:48.123 "trsvcid": "$NVMF_PORT", 00:20:48.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.123 "hdgst": ${hdgst:-false}, 00:20:48.123 "ddgst": ${ddgst:-false} 00:20:48.123 }, 00:20:48.123 "method": "bdev_nvme_attach_controller" 00:20:48.123 } 00:20:48.123 EOF 00:20:48.123 )") 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.123 { 00:20:48.123 "params": { 00:20:48.123 "name": "Nvme$subsystem", 00:20:48.123 "trtype": "$TEST_TRANSPORT", 00:20:48.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.123 "adrfam": "ipv4", 00:20:48.123 "trsvcid": "$NVMF_PORT", 00:20:48.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.123 "hdgst": ${hdgst:-false}, 00:20:48.123 "ddgst": ${ddgst:-false} 00:20:48.123 }, 00:20:48.123 "method": "bdev_nvme_attach_controller" 00:20:48.123 } 00:20:48.123 EOF 00:20:48.123 )") 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:48.123 { 00:20:48.123 "params": { 00:20:48.123 "name": "Nvme$subsystem", 00:20:48.123 "trtype": "$TEST_TRANSPORT", 00:20:48.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:48.123 "adrfam": "ipv4", 00:20:48.123 "trsvcid": "$NVMF_PORT", 00:20:48.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:48.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:48.123 "hdgst": ${hdgst:-false}, 00:20:48.123 "ddgst": ${ddgst:-false} 00:20:48.123 }, 00:20:48.123 "method": "bdev_nvme_attach_controller" 00:20:48.123 } 00:20:48.123 EOF 00:20:48.123 )") 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:48.123 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:48.123 17:11:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:48.123 "params": { 00:20:48.123 "name": "Nvme1", 00:20:48.123 "trtype": "tcp", 00:20:48.123 "traddr": "10.0.0.2", 00:20:48.123 "adrfam": "ipv4", 00:20:48.123 "trsvcid": "4420", 00:20:48.123 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.123 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.123 "hdgst": false, 00:20:48.123 "ddgst": false 00:20:48.123 }, 00:20:48.123 "method": "bdev_nvme_attach_controller" 00:20:48.123 },{ 00:20:48.123 "params": { 00:20:48.123 "name": "Nvme2", 00:20:48.123 "trtype": "tcp", 00:20:48.123 "traddr": "10.0.0.2", 00:20:48.123 "adrfam": "ipv4", 00:20:48.123 "trsvcid": "4420", 00:20:48.123 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:48.123 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:48.123 "hdgst": false, 00:20:48.123 "ddgst": false 00:20:48.123 }, 00:20:48.123 "method": "bdev_nvme_attach_controller" 00:20:48.123 },{ 00:20:48.123 "params": { 00:20:48.123 "name": "Nvme3", 00:20:48.123 "trtype": "tcp", 00:20:48.123 "traddr": "10.0.0.2", 00:20:48.123 "adrfam": "ipv4", 00:20:48.123 "trsvcid": "4420", 00:20:48.124 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:48.124 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:48.124 "hdgst": false, 00:20:48.124 "ddgst": false 00:20:48.124 }, 00:20:48.124 "method": "bdev_nvme_attach_controller" 00:20:48.124 },{ 00:20:48.124 "params": { 00:20:48.124 "name": "Nvme4", 00:20:48.124 "trtype": "tcp", 00:20:48.124 "traddr": "10.0.0.2", 00:20:48.124 "adrfam": "ipv4", 00:20:48.124 "trsvcid": "4420", 00:20:48.124 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:48.124 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:48.124 "hdgst": false, 00:20:48.124 "ddgst": false 00:20:48.124 }, 00:20:48.124 "method": "bdev_nvme_attach_controller" 00:20:48.124 },{ 00:20:48.124 "params": { 00:20:48.124 "name": "Nvme5", 00:20:48.124 "trtype": "tcp", 00:20:48.124 "traddr": "10.0.0.2", 00:20:48.124 "adrfam": "ipv4", 00:20:48.124 "trsvcid": "4420", 00:20:48.124 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:48.124 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:48.124 "hdgst": false, 00:20:48.124 "ddgst": false 00:20:48.124 }, 00:20:48.124 "method": "bdev_nvme_attach_controller" 00:20:48.124 },{ 00:20:48.124 "params": { 00:20:48.124 "name": "Nvme6", 00:20:48.124 "trtype": "tcp", 00:20:48.124 "traddr": "10.0.0.2", 00:20:48.124 "adrfam": "ipv4", 00:20:48.124 "trsvcid": "4420", 00:20:48.124 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:48.124 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:48.124 "hdgst": false, 00:20:48.124 "ddgst": false 00:20:48.124 }, 00:20:48.124 "method": "bdev_nvme_attach_controller" 00:20:48.124 },{ 00:20:48.124 "params": { 00:20:48.124 "name": "Nvme7", 00:20:48.124 "trtype": "tcp", 00:20:48.124 "traddr": "10.0.0.2", 00:20:48.124 "adrfam": "ipv4", 00:20:48.124 "trsvcid": "4420", 00:20:48.124 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:48.124 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:48.124 "hdgst": false, 00:20:48.124 "ddgst": false 00:20:48.124 }, 00:20:48.124 "method": "bdev_nvme_attach_controller" 00:20:48.124 },{ 00:20:48.124 "params": { 00:20:48.124 "name": "Nvme8", 00:20:48.124 "trtype": "tcp", 00:20:48.124 "traddr": "10.0.0.2", 00:20:48.124 "adrfam": "ipv4", 00:20:48.124 "trsvcid": "4420", 00:20:48.124 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:48.124 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:48.124 "hdgst": false, 
00:20:48.124 "ddgst": false 00:20:48.124 }, 00:20:48.124 "method": "bdev_nvme_attach_controller" 00:20:48.124 },{ 00:20:48.124 "params": { 00:20:48.124 "name": "Nvme9", 00:20:48.124 "trtype": "tcp", 00:20:48.124 "traddr": "10.0.0.2", 00:20:48.124 "adrfam": "ipv4", 00:20:48.124 "trsvcid": "4420", 00:20:48.124 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:48.124 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:48.124 "hdgst": false, 00:20:48.124 "ddgst": false 00:20:48.124 }, 00:20:48.124 "method": "bdev_nvme_attach_controller" 00:20:48.124 },{ 00:20:48.124 "params": { 00:20:48.124 "name": "Nvme10", 00:20:48.124 "trtype": "tcp", 00:20:48.124 "traddr": "10.0.0.2", 00:20:48.124 "adrfam": "ipv4", 00:20:48.124 "trsvcid": "4420", 00:20:48.124 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:48.124 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:48.124 "hdgst": false, 00:20:48.124 "ddgst": false 00:20:48.124 }, 00:20:48.124 "method": "bdev_nvme_attach_controller" 00:20:48.124 }' 00:20:48.383 [2024-05-15 17:11:35.781014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.383 [2024-05-15 17:11:35.854786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.759 Running I/O for 1 seconds... 00:20:50.694 00:20:50.694 Latency(us) 00:20:50.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.694 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.694 Verification LBA range: start 0x0 length 0x400 00:20:50.694 Nvme1n1 : 1.08 238.03 14.88 0.00 0.00 266502.68 18805.98 219745.06 00:20:50.694 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.694 Verification LBA range: start 0x0 length 0x400 00:20:50.694 Nvme2n1 : 1.05 243.05 15.19 0.00 0.00 256952.10 17324.30 220656.86 00:20:50.694 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.694 Verification LBA range: start 0x0 length 0x400 00:20:50.694 Nvme3n1 : 1.10 294.37 18.40 0.00 0.00 205322.21 16298.52 196949.93 00:20:50.694 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.694 Verification LBA range: start 0x0 length 0x400 00:20:50.694 Nvme4n1 : 1.11 289.52 18.10 0.00 0.00 209178.27 15158.76 226127.69 00:20:50.694 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.694 Verification LBA range: start 0x0 length 0x400 00:20:50.694 Nvme5n1 : 1.11 289.06 18.07 0.00 0.00 206350.60 15842.62 213362.42 00:20:50.694 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.694 Verification LBA range: start 0x0 length 0x400 00:20:50.694 Nvme6n1 : 1.12 286.51 17.91 0.00 0.00 205240.94 15386.71 216097.84 00:20:50.694 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.694 Verification LBA range: start 0x0 length 0x400 00:20:50.694 Nvme7n1 : 1.11 287.34 17.96 0.00 0.00 201629.34 18919.96 206979.78 00:20:50.694 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.694 Verification LBA range: start 0x0 length 0x400 00:20:50.694 Nvme8n1 : 1.12 285.31 17.83 0.00 0.00 200095.21 12765.27 224304.08 00:20:50.694 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.694 Verification LBA range: start 0x0 length 0x400 00:20:50.694 Nvme9n1 : 1.16 329.11 20.57 0.00 0.00 171412.15 8035.28 224304.08 00:20:50.694 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:50.694 Verification LBA range: start 0x0 length 0x400 
00:20:50.694 Nvme10n1 : 1.15 279.04 17.44 0.00 0.00 198783.15 18008.15 246187.41 00:20:50.694 =================================================================================================================== 00:20:50.694 Total : 2821.34 176.33 0.00 0.00 209310.73 8035.28 246187.41 00:20:50.952 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:50.952 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:50.952 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:50.952 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:50.952 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:50.952 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:50.952 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:50.952 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:50.952 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:50.953 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:50.953 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:50.953 rmmod nvme_tcp 00:20:50.953 rmmod nvme_fabrics 00:20:50.953 rmmod nvme_keyring 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3124481 ']' 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3124481 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 3124481 ']' 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 3124481 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3124481 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3124481' 00:20:51.211 killing process with pid 3124481 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 3124481 00:20:51.211 [2024-05-15 17:11:38.679622] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor 
of trtype' scheduled for removal in v24.09 hit 1 times 00:20:51.211 17:11:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 3124481 00:20:51.470 17:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:51.470 17:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:51.470 17:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:51.470 17:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:51.470 17:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:51.470 17:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.470 17:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.470 17:11:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:54.009 00:20:54.009 real 0m14.867s 00:20:54.009 user 0m34.144s 00:20:54.009 sys 0m5.352s 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:54.009 ************************************ 00:20:54.009 END TEST nvmf_shutdown_tc1 00:20:54.009 ************************************ 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:54.009 ************************************ 00:20:54.009 START TEST nvmf_shutdown_tc2 00:20:54.009 ************************************ 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.009 17:11:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.009 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:54.010 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:54.010 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:54.010 Found net devices under 0000:86:00.0: cvl_0_0 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:54.010 Found net devices under 0000:86:00.1: cvl_0_1 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:54.010 17:11:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:54.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:20:54.010 00:20:54.010 --- 10.0.0.2 ping statistics --- 00:20:54.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.010 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:54.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:54.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:20:54.010 00:20:54.010 --- 10.0.0.1 ping statistics --- 00:20:54.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.010 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3126342 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3126342 00:20:54.010 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:54.011 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3126342 ']' 00:20:54.011 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.011 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:54.011 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.011 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:54.011 17:11:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.011 [2024-05-15 17:11:41.613195] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:20:54.011 [2024-05-15 17:11:41.613236] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.011 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.269 [2024-05-15 17:11:41.670837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:54.269 [2024-05-15 17:11:41.749875] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.269 [2024-05-15 17:11:41.749910] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.270 [2024-05-15 17:11:41.749918] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.270 [2024-05-15 17:11:41.749923] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.270 [2024-05-15 17:11:41.749932] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
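[annotation, not part of the captured test output] The nvmf_tgt above is started with "-m 0x1E" and the app reports "Total cores available: 4"; 0x1E is a core bitmask (binary 11110), so one SPDK reactor is expected on each of cores 1-4, which matches the "Reactor started on core" messages that follow. A minimal bash sketch (illustrative only, not taken from the SPDK test scripts) of how such a mask maps to core numbers:
    # Decode an SPDK -m core mask into the core numbers it selects.
    mask=0x1E                        # same value passed to nvmf_tgt above
    for core in $(seq 0 31); do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done
    # For 0x1E this prints cores 1 2 3 4.
[end annotation]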
00:20:54.270 [2024-05-15 17:11:41.750028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.270 [2024-05-15 17:11:41.750110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:54.270 [2024-05-15 17:11:41.750230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.270 [2024-05-15 17:11:41.750231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 [2024-05-15 17:11:42.457978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:54.835 17:11:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:54.835 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:55.093 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.093 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:55.093 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.093 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:55.093 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.093 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:55.093 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:55.093 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:55.093 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:55.093 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.093 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.093 Malloc1 00:20:55.093 [2024-05-15 17:11:42.549789] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:55.093 [2024-05-15 17:11:42.550024] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.093 Malloc2 00:20:55.093 Malloc3 00:20:55.093 Malloc4 00:20:55.093 Malloc5 00:20:55.093 Malloc6 00:20:55.352 Malloc7 00:20:55.352 Malloc8 00:20:55.352 Malloc9 00:20:55.352 Malloc10 00:20:55.352 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.352 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:55.352 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.352 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.352 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3126616 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3126616 /var/tmp/bdevperf.sock 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3126616 ']' 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:55.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.353 { 00:20:55.353 "params": { 00:20:55.353 "name": "Nvme$subsystem", 00:20:55.353 "trtype": "$TEST_TRANSPORT", 00:20:55.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.353 "adrfam": "ipv4", 00:20:55.353 "trsvcid": "$NVMF_PORT", 00:20:55.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.353 "hdgst": ${hdgst:-false}, 00:20:55.353 "ddgst": ${ddgst:-false} 00:20:55.353 }, 00:20:55.353 "method": "bdev_nvme_attach_controller" 00:20:55.353 } 00:20:55.353 EOF 00:20:55.353 )") 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.353 { 00:20:55.353 "params": { 00:20:55.353 "name": "Nvme$subsystem", 00:20:55.353 "trtype": "$TEST_TRANSPORT", 00:20:55.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.353 "adrfam": "ipv4", 00:20:55.353 "trsvcid": "$NVMF_PORT", 00:20:55.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.353 "hdgst": ${hdgst:-false}, 00:20:55.353 "ddgst": ${ddgst:-false} 00:20:55.353 }, 00:20:55.353 "method": "bdev_nvme_attach_controller" 00:20:55.353 } 00:20:55.353 EOF 00:20:55.353 )") 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.353 { 00:20:55.353 "params": { 00:20:55.353 "name": "Nvme$subsystem", 00:20:55.353 "trtype": "$TEST_TRANSPORT", 00:20:55.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.353 "adrfam": "ipv4", 00:20:55.353 "trsvcid": "$NVMF_PORT", 00:20:55.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.353 "hdgst": ${hdgst:-false}, 00:20:55.353 "ddgst": ${ddgst:-false} 00:20:55.353 }, 00:20:55.353 "method": "bdev_nvme_attach_controller" 00:20:55.353 } 00:20:55.353 EOF 00:20:55.353 )") 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.353 { 00:20:55.353 "params": { 
00:20:55.353 "name": "Nvme$subsystem", 00:20:55.353 "trtype": "$TEST_TRANSPORT", 00:20:55.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.353 "adrfam": "ipv4", 00:20:55.353 "trsvcid": "$NVMF_PORT", 00:20:55.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.353 "hdgst": ${hdgst:-false}, 00:20:55.353 "ddgst": ${ddgst:-false} 00:20:55.353 }, 00:20:55.353 "method": "bdev_nvme_attach_controller" 00:20:55.353 } 00:20:55.353 EOF 00:20:55.353 )") 00:20:55.353 17:11:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:55.353 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.353 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.353 { 00:20:55.353 "params": { 00:20:55.353 "name": "Nvme$subsystem", 00:20:55.353 "trtype": "$TEST_TRANSPORT", 00:20:55.353 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.353 "adrfam": "ipv4", 00:20:55.353 "trsvcid": "$NVMF_PORT", 00:20:55.353 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.353 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.353 "hdgst": ${hdgst:-false}, 00:20:55.353 "ddgst": ${ddgst:-false} 00:20:55.353 }, 00:20:55.353 "method": "bdev_nvme_attach_controller" 00:20:55.353 } 00:20:55.353 EOF 00:20:55.353 )") 00:20:55.353 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:55.613 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.613 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.613 { 00:20:55.613 "params": { 00:20:55.613 "name": "Nvme$subsystem", 00:20:55.613 "trtype": "$TEST_TRANSPORT", 00:20:55.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.613 "adrfam": "ipv4", 00:20:55.613 "trsvcid": "$NVMF_PORT", 00:20:55.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.613 "hdgst": ${hdgst:-false}, 00:20:55.613 "ddgst": ${ddgst:-false} 00:20:55.613 }, 00:20:55.613 "method": "bdev_nvme_attach_controller" 00:20:55.613 } 00:20:55.613 EOF 00:20:55.613 )") 00:20:55.613 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:55.613 [2024-05-15 17:11:43.018142] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:20:55.613 [2024-05-15 17:11:43.018196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126616 ] 00:20:55.613 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.613 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.613 { 00:20:55.613 "params": { 00:20:55.613 "name": "Nvme$subsystem", 00:20:55.613 "trtype": "$TEST_TRANSPORT", 00:20:55.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.613 "adrfam": "ipv4", 00:20:55.614 "trsvcid": "$NVMF_PORT", 00:20:55.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.614 "hdgst": ${hdgst:-false}, 00:20:55.614 "ddgst": ${ddgst:-false} 00:20:55.614 }, 00:20:55.614 "method": "bdev_nvme_attach_controller" 00:20:55.614 } 00:20:55.614 EOF 00:20:55.614 )") 00:20:55.614 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:55.614 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.614 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.614 { 00:20:55.614 "params": { 00:20:55.614 "name": "Nvme$subsystem", 00:20:55.614 "trtype": "$TEST_TRANSPORT", 00:20:55.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.614 "adrfam": "ipv4", 00:20:55.614 "trsvcid": "$NVMF_PORT", 00:20:55.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.614 "hdgst": ${hdgst:-false}, 00:20:55.614 "ddgst": ${ddgst:-false} 00:20:55.614 }, 00:20:55.614 "method": "bdev_nvme_attach_controller" 00:20:55.614 } 00:20:55.614 EOF 00:20:55.614 )") 00:20:55.614 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:55.614 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.614 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.614 { 00:20:55.614 "params": { 00:20:55.614 "name": "Nvme$subsystem", 00:20:55.614 "trtype": "$TEST_TRANSPORT", 00:20:55.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.614 "adrfam": "ipv4", 00:20:55.614 "trsvcid": "$NVMF_PORT", 00:20:55.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.614 "hdgst": ${hdgst:-false}, 00:20:55.614 "ddgst": ${ddgst:-false} 00:20:55.614 }, 00:20:55.614 "method": "bdev_nvme_attach_controller" 00:20:55.614 } 00:20:55.614 EOF 00:20:55.614 )") 00:20:55.614 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:55.614 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:55.614 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:55.614 { 00:20:55.614 "params": { 00:20:55.614 "name": "Nvme$subsystem", 00:20:55.614 "trtype": "$TEST_TRANSPORT", 00:20:55.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:55.614 "adrfam": "ipv4", 00:20:55.614 "trsvcid": "$NVMF_PORT", 00:20:55.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:55.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:55.614 "hdgst": ${hdgst:-false}, 
00:20:55.614 "ddgst": ${ddgst:-false} 00:20:55.614 }, 00:20:55.614 "method": "bdev_nvme_attach_controller" 00:20:55.614 } 00:20:55.614 EOF 00:20:55.614 )") 00:20:55.614 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:55.614 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.614 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:20:55.614 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:55.614 17:11:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:55.614 "params": { 00:20:55.614 "name": "Nvme1", 00:20:55.614 "trtype": "tcp", 00:20:55.614 "traddr": "10.0.0.2", 00:20:55.614 "adrfam": "ipv4", 00:20:55.614 "trsvcid": "4420", 00:20:55.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:55.614 "hdgst": false, 00:20:55.614 "ddgst": false 00:20:55.614 }, 00:20:55.614 "method": "bdev_nvme_attach_controller" 00:20:55.614 },{ 00:20:55.614 "params": { 00:20:55.614 "name": "Nvme2", 00:20:55.614 "trtype": "tcp", 00:20:55.614 "traddr": "10.0.0.2", 00:20:55.614 "adrfam": "ipv4", 00:20:55.614 "trsvcid": "4420", 00:20:55.614 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:55.614 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:55.614 "hdgst": false, 00:20:55.614 "ddgst": false 00:20:55.614 }, 00:20:55.614 "method": "bdev_nvme_attach_controller" 00:20:55.614 },{ 00:20:55.614 "params": { 00:20:55.614 "name": "Nvme3", 00:20:55.614 "trtype": "tcp", 00:20:55.614 "traddr": "10.0.0.2", 00:20:55.614 "adrfam": "ipv4", 00:20:55.614 "trsvcid": "4420", 00:20:55.614 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:55.614 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:55.614 "hdgst": false, 00:20:55.614 "ddgst": false 00:20:55.614 }, 00:20:55.614 "method": "bdev_nvme_attach_controller" 00:20:55.614 },{ 00:20:55.614 "params": { 00:20:55.614 "name": "Nvme4", 00:20:55.614 "trtype": "tcp", 00:20:55.614 "traddr": "10.0.0.2", 00:20:55.614 "adrfam": "ipv4", 00:20:55.614 "trsvcid": "4420", 00:20:55.614 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:55.614 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:55.614 "hdgst": false, 00:20:55.614 "ddgst": false 00:20:55.614 }, 00:20:55.614 "method": "bdev_nvme_attach_controller" 00:20:55.614 },{ 00:20:55.614 "params": { 00:20:55.614 "name": "Nvme5", 00:20:55.614 "trtype": "tcp", 00:20:55.614 "traddr": "10.0.0.2", 00:20:55.614 "adrfam": "ipv4", 00:20:55.614 "trsvcid": "4420", 00:20:55.614 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:55.614 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:55.614 "hdgst": false, 00:20:55.614 "ddgst": false 00:20:55.614 }, 00:20:55.614 "method": "bdev_nvme_attach_controller" 00:20:55.614 },{ 00:20:55.614 "params": { 00:20:55.614 "name": "Nvme6", 00:20:55.614 "trtype": "tcp", 00:20:55.614 "traddr": "10.0.0.2", 00:20:55.614 "adrfam": "ipv4", 00:20:55.614 "trsvcid": "4420", 00:20:55.614 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:55.614 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:55.614 "hdgst": false, 00:20:55.614 "ddgst": false 00:20:55.614 }, 00:20:55.614 "method": "bdev_nvme_attach_controller" 00:20:55.614 },{ 00:20:55.614 "params": { 00:20:55.614 "name": "Nvme7", 00:20:55.614 "trtype": "tcp", 00:20:55.614 "traddr": "10.0.0.2", 00:20:55.614 "adrfam": "ipv4", 00:20:55.614 "trsvcid": "4420", 00:20:55.614 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:55.614 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:55.614 "hdgst": false, 00:20:55.614 "ddgst": false 
00:20:55.614 }, 00:20:55.614 "method": "bdev_nvme_attach_controller" 00:20:55.614 },{ 00:20:55.614 "params": { 00:20:55.614 "name": "Nvme8", 00:20:55.614 "trtype": "tcp", 00:20:55.614 "traddr": "10.0.0.2", 00:20:55.614 "adrfam": "ipv4", 00:20:55.614 "trsvcid": "4420", 00:20:55.614 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:55.614 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:55.614 "hdgst": false, 00:20:55.614 "ddgst": false 00:20:55.614 }, 00:20:55.614 "method": "bdev_nvme_attach_controller" 00:20:55.614 },{ 00:20:55.614 "params": { 00:20:55.614 "name": "Nvme9", 00:20:55.614 "trtype": "tcp", 00:20:55.614 "traddr": "10.0.0.2", 00:20:55.614 "adrfam": "ipv4", 00:20:55.614 "trsvcid": "4420", 00:20:55.614 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:55.614 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:55.614 "hdgst": false, 00:20:55.614 "ddgst": false 00:20:55.614 }, 00:20:55.614 "method": "bdev_nvme_attach_controller" 00:20:55.614 },{ 00:20:55.614 "params": { 00:20:55.614 "name": "Nvme10", 00:20:55.614 "trtype": "tcp", 00:20:55.614 "traddr": "10.0.0.2", 00:20:55.614 "adrfam": "ipv4", 00:20:55.614 "trsvcid": "4420", 00:20:55.614 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:55.615 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:55.615 "hdgst": false, 00:20:55.615 "ddgst": false 00:20:55.615 }, 00:20:55.615 "method": "bdev_nvme_attach_controller" 00:20:55.615 }' 00:20:55.615 [2024-05-15 17:11:43.073916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.615 [2024-05-15 17:11:43.146288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.985 Running I/O for 10 seconds... 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.985 17:11:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:56.985 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:57.243 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:57.243 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:57.243 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:57.243 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:57.243 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.243 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.243 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.243 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:57.243 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:57.243 17:11:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3126616 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3126616 ']' 00:20:57.501 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3126616 00:20:57.758 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:20:57.758 17:11:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:57.758 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3126616 00:20:57.758 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:57.758 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:57.758 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3126616' 00:20:57.758 killing process with pid 3126616 00:20:57.758 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3126616 00:20:57.758 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3126616 00:20:57.758 Received shutdown signal, test time was about 0.908080 seconds 00:20:57.758 00:20:57.758 Latency(us) 00:20:57.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.758 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.758 Verification LBA range: start 0x0 length 0x400 00:20:57.758 Nvme1n1 : 0.91 282.12 17.63 0.00 0.00 224205.91 17324.30 226127.69 00:20:57.758 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.758 Verification LBA range: start 0x0 length 0x400 00:20:57.758 Nvme2n1 : 0.91 282.36 17.65 0.00 0.00 220049.81 15044.79 217921.45 00:20:57.758 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.758 Verification LBA range: start 0x0 length 0x400 00:20:57.758 Nvme3n1 : 0.88 304.75 19.05 0.00 0.00 198395.70 3162.82 212450.62 00:20:57.758 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.758 Verification LBA range: start 0x0 length 0x400 00:20:57.758 Nvme4n1 : 0.88 289.34 18.08 0.00 0.00 206985.57 22681.15 206067.98 00:20:57.758 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.758 Verification LBA range: start 0x0 length 0x400 00:20:57.758 Nvme5n1 : 0.90 285.60 17.85 0.00 0.00 205895.01 17324.30 211538.81 00:20:57.758 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.758 Verification LBA range: start 0x0 length 0x400 00:20:57.758 Nvme6n1 : 0.89 287.18 17.95 0.00 0.00 200694.21 18008.15 213362.42 00:20:57.758 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.758 Verification LBA range: start 0x0 length 0x400 00:20:57.758 Nvme7n1 : 0.90 283.94 17.75 0.00 0.00 199069.16 32597.04 212450.62 00:20:57.759 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.759 Verification LBA range: start 0x0 length 0x400 00:20:57.759 Nvme8n1 : 0.90 284.19 17.76 0.00 0.00 194838.04 15158.76 216097.84 00:20:57.759 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.759 Verification LBA range: start 0x0 length 0x400 00:20:57.759 Nvme9n1 : 0.88 223.16 13.95 0.00 0.00 239646.39 11055.64 238892.97 00:20:57.759 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:57.759 Verification LBA range: start 0x0 length 0x400 00:20:57.759 Nvme10n1 : 0.89 225.50 14.09 0.00 0.00 231820.99 3519.00 229774.91 00:20:57.759 =================================================================================================================== 00:20:57.759 Total : 2748.15 171.76 0.00 0.00 210984.71 
3162.82 238892.97 00:20:58.016 17:11:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3126342 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:58.947 rmmod nvme_tcp 00:20:58.947 rmmod nvme_fabrics 00:20:58.947 rmmod nvme_keyring 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3126342 ']' 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3126342 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3126342 ']' 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3126342 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3126342 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3126342' 00:20:58.947 killing process with pid 3126342 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3126342 00:20:58.947 [2024-05-15 17:11:46.596481] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 
times 00:20:58.947 17:11:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3126342 00:20:59.514 17:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:59.514 17:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:59.514 17:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:59.514 17:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:59.514 17:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:59.514 17:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.514 17:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.514 17:11:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.419 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:01.419 00:21:01.419 real 0m7.832s 00:21:01.419 user 0m23.286s 00:21:01.419 sys 0m1.352s 00:21:01.419 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:01.419 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:01.419 ************************************ 00:21:01.419 END TEST nvmf_shutdown_tc2 00:21:01.419 ************************************ 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:01.678 ************************************ 00:21:01.678 START TEST nvmf_shutdown_tc3 00:21:01.678 ************************************ 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy 
!= virt ]] 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:01.678 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:01.679 17:11:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:01.679 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:01.679 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:21:01.679 Found net devices under 0000:86:00.0: cvl_0_0 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:01.679 Found net devices under 0000:86:00.1: cvl_0_1 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:01.679 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:01.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:21:01.938 00:21:01.938 --- 10.0.0.2 ping statistics --- 00:21:01.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.938 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:01.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:21:01.938 00:21:01.938 --- 10.0.0.1 ping statistics --- 00:21:01.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.938 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3127673 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3127673 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3127673 ']' 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:01.938 17:11:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:01.938 [2024-05-15 17:11:49.503065] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:21:01.938 [2024-05-15 17:11:49.503108] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.938 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.938 [2024-05-15 17:11:49.558915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.195 [2024-05-15 17:11:49.641306] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.195 [2024-05-15 17:11:49.641340] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.195 [2024-05-15 17:11:49.641347] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.195 [2024-05-15 17:11:49.641354] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.195 [2024-05-15 17:11:49.641359] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
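The nvmftestinit trace above (nvmf/common.sh, nvmf_tcp_init) builds a point-to-point NVMe/TCP topology by moving one port of the E810 pair into a private network namespace while its peer stays in the default namespace. A standalone sketch of the same steps follows, using the interface names and addresses from this particular run (not fixed values) and a path relative to the SPDK tree instead of the full workspace path in the trace; it needs root.

# flush any stale addresses on the two cvl ports, as the harness does
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
# target side lives in its own namespace; initiator side stays in the default one
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP (port 4420) in from the initiator-side interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# reachability checks in both directions, then start the target inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

The triple "ip netns exec cvl_0_0_ns_spdk" prefix seen in the trace is presumably the namespace wrapper accumulating once per nvmftestinit call in the same shell (tc1, tc2, tc3); re-entering the same namespace is harmless, so the single prefix above is equivalent.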
00:21:02.195 [2024-05-15 17:11:49.641456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.195 [2024-05-15 17:11:49.641539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.195 [2024-05-15 17:11:49.641645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.195 [2024-05-15 17:11:49.641646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:02.759 [2024-05-15 17:11:50.346056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:02.759 17:11:50 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.759 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.016 Malloc1 00:21:03.016 [2024-05-15 17:11:50.441886] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:03.016 [2024-05-15 17:11:50.442114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.016 Malloc2 00:21:03.016 Malloc3 00:21:03.016 Malloc4 00:21:03.016 Malloc5 00:21:03.016 Malloc6 00:21:03.016 Malloc7 00:21:03.273 Malloc8 00:21:03.273 Malloc9 00:21:03.273 Malloc10 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3127958 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3127958 /var/tmp/bdevperf.sock 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3127958 ']' 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.274 { 00:21:03.274 "params": { 00:21:03.274 "name": "Nvme$subsystem", 00:21:03.274 "trtype": "$TEST_TRANSPORT", 00:21:03.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.274 "adrfam": "ipv4", 00:21:03.274 "trsvcid": "$NVMF_PORT", 00:21:03.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.274 "hdgst": ${hdgst:-false}, 00:21:03.274 "ddgst": ${ddgst:-false} 00:21:03.274 }, 00:21:03.274 "method": "bdev_nvme_attach_controller" 00:21:03.274 } 00:21:03.274 EOF 00:21:03.274 )") 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.274 { 00:21:03.274 "params": { 00:21:03.274 "name": "Nvme$subsystem", 00:21:03.274 "trtype": "$TEST_TRANSPORT", 00:21:03.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.274 "adrfam": "ipv4", 00:21:03.274 "trsvcid": "$NVMF_PORT", 00:21:03.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.274 "hdgst": ${hdgst:-false}, 00:21:03.274 "ddgst": ${ddgst:-false} 00:21:03.274 }, 00:21:03.274 "method": "bdev_nvme_attach_controller" 00:21:03.274 } 00:21:03.274 EOF 00:21:03.274 )") 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.274 { 00:21:03.274 "params": { 00:21:03.274 "name": "Nvme$subsystem", 00:21:03.274 "trtype": "$TEST_TRANSPORT", 00:21:03.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.274 "adrfam": "ipv4", 00:21:03.274 "trsvcid": "$NVMF_PORT", 00:21:03.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.274 "hdgst": ${hdgst:-false}, 00:21:03.274 "ddgst": ${ddgst:-false} 00:21:03.274 }, 00:21:03.274 "method": "bdev_nvme_attach_controller" 00:21:03.274 } 00:21:03.274 EOF 00:21:03.274 )") 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.274 { 00:21:03.274 "params": { 
00:21:03.274 "name": "Nvme$subsystem", 00:21:03.274 "trtype": "$TEST_TRANSPORT", 00:21:03.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.274 "adrfam": "ipv4", 00:21:03.274 "trsvcid": "$NVMF_PORT", 00:21:03.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.274 "hdgst": ${hdgst:-false}, 00:21:03.274 "ddgst": ${ddgst:-false} 00:21:03.274 }, 00:21:03.274 "method": "bdev_nvme_attach_controller" 00:21:03.274 } 00:21:03.274 EOF 00:21:03.274 )") 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.274 { 00:21:03.274 "params": { 00:21:03.274 "name": "Nvme$subsystem", 00:21:03.274 "trtype": "$TEST_TRANSPORT", 00:21:03.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.274 "adrfam": "ipv4", 00:21:03.274 "trsvcid": "$NVMF_PORT", 00:21:03.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.274 "hdgst": ${hdgst:-false}, 00:21:03.274 "ddgst": ${ddgst:-false} 00:21:03.274 }, 00:21:03.274 "method": "bdev_nvme_attach_controller" 00:21:03.274 } 00:21:03.274 EOF 00:21:03.274 )") 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.274 { 00:21:03.274 "params": { 00:21:03.274 "name": "Nvme$subsystem", 00:21:03.274 "trtype": "$TEST_TRANSPORT", 00:21:03.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.274 "adrfam": "ipv4", 00:21:03.274 "trsvcid": "$NVMF_PORT", 00:21:03.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.274 "hdgst": ${hdgst:-false}, 00:21:03.274 "ddgst": ${ddgst:-false} 00:21:03.274 }, 00:21:03.274 "method": "bdev_nvme_attach_controller" 00:21:03.274 } 00:21:03.274 EOF 00:21:03.274 )") 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.274 { 00:21:03.274 "params": { 00:21:03.274 "name": "Nvme$subsystem", 00:21:03.274 "trtype": "$TEST_TRANSPORT", 00:21:03.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.274 "adrfam": "ipv4", 00:21:03.274 "trsvcid": "$NVMF_PORT", 00:21:03.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.274 "hdgst": ${hdgst:-false}, 00:21:03.274 "ddgst": ${ddgst:-false} 00:21:03.274 }, 00:21:03.274 "method": "bdev_nvme_attach_controller" 00:21:03.274 } 00:21:03.274 EOF 00:21:03.274 )") 00:21:03.274 [2024-05-15 17:11:50.915491] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:21:03.274 [2024-05-15 17:11:50.915539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127958 ] 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.274 { 00:21:03.274 "params": { 00:21:03.274 "name": "Nvme$subsystem", 00:21:03.274 "trtype": "$TEST_TRANSPORT", 00:21:03.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.274 "adrfam": "ipv4", 00:21:03.274 "trsvcid": "$NVMF_PORT", 00:21:03.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.274 "hdgst": ${hdgst:-false}, 00:21:03.274 "ddgst": ${ddgst:-false} 00:21:03.274 }, 00:21:03.274 "method": "bdev_nvme_attach_controller" 00:21:03.274 } 00:21:03.274 EOF 00:21:03.274 )") 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.274 { 00:21:03.274 "params": { 00:21:03.274 "name": "Nvme$subsystem", 00:21:03.274 "trtype": "$TEST_TRANSPORT", 00:21:03.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.274 "adrfam": "ipv4", 00:21:03.274 "trsvcid": "$NVMF_PORT", 00:21:03.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.274 "hdgst": ${hdgst:-false}, 00:21:03.274 "ddgst": ${ddgst:-false} 00:21:03.274 }, 00:21:03.274 "method": "bdev_nvme_attach_controller" 00:21:03.274 } 00:21:03.274 EOF 00:21:03.274 )") 00:21:03.274 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:03.532 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.532 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.532 { 00:21:03.532 "params": { 00:21:03.532 "name": "Nvme$subsystem", 00:21:03.532 "trtype": "$TEST_TRANSPORT", 00:21:03.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.532 "adrfam": "ipv4", 00:21:03.532 "trsvcid": "$NVMF_PORT", 00:21:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.532 "hdgst": ${hdgst:-false}, 00:21:03.532 "ddgst": ${ddgst:-false} 00:21:03.532 }, 00:21:03.532 "method": "bdev_nvme_attach_controller" 00:21:03.532 } 00:21:03.532 EOF 00:21:03.532 )") 00:21:03.532 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:03.532 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.532 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
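The "--json /dev/fd/63" argument on the bdevperf command line, together with the gen_nvmf_target_json 1 2 ... 10 call traced above, suggests the bdevperf configuration is generated on the fly and handed over via process substitution. Below is a minimal sketch of the fragment-building pattern visible in the trace (one heredoc per subsystem, the IFS=, join, and the jq validation); gen_config_fragments is a hypothetical name, and the real helper nests the resulting list inside a complete "bdev" subsystem configuration before bdevperf consumes it.

# Hypothetical re-creation of the per-subsystem loop traced above; address,
# port and NQN patterns are the ones used in this run.
gen_config_fragments() {
    local subsystem config=()
    for subsystem in "$@"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # Join the fragments and pretty-print them; the real gen_nvmf_target_json
    # wraps this list in a full config document for bdevperf --json.
    printf '[%s]\n' "${config[*]}" | jq .
}
gen_config_fragments 1 2 3 4 5 6 7 8 9 10

Running this prints essentially the same ten attach-controller entries (Nvme1 through Nvme10) that appear further down in this log as the expanded printf '%s\n' argument.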
00:21:03.532 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:03.532 17:11:50 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:03.532 "params": { 00:21:03.532 "name": "Nvme1", 00:21:03.532 "trtype": "tcp", 00:21:03.532 "traddr": "10.0.0.2", 00:21:03.532 "adrfam": "ipv4", 00:21:03.532 "trsvcid": "4420", 00:21:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.532 "hdgst": false, 00:21:03.532 "ddgst": false 00:21:03.532 }, 00:21:03.532 "method": "bdev_nvme_attach_controller" 00:21:03.532 },{ 00:21:03.532 "params": { 00:21:03.532 "name": "Nvme2", 00:21:03.532 "trtype": "tcp", 00:21:03.532 "traddr": "10.0.0.2", 00:21:03.532 "adrfam": "ipv4", 00:21:03.532 "trsvcid": "4420", 00:21:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:03.532 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:03.532 "hdgst": false, 00:21:03.532 "ddgst": false 00:21:03.532 }, 00:21:03.532 "method": "bdev_nvme_attach_controller" 00:21:03.532 },{ 00:21:03.532 "params": { 00:21:03.532 "name": "Nvme3", 00:21:03.532 "trtype": "tcp", 00:21:03.532 "traddr": "10.0.0.2", 00:21:03.532 "adrfam": "ipv4", 00:21:03.532 "trsvcid": "4420", 00:21:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:03.532 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:03.532 "hdgst": false, 00:21:03.532 "ddgst": false 00:21:03.532 }, 00:21:03.532 "method": "bdev_nvme_attach_controller" 00:21:03.532 },{ 00:21:03.532 "params": { 00:21:03.532 "name": "Nvme4", 00:21:03.532 "trtype": "tcp", 00:21:03.532 "traddr": "10.0.0.2", 00:21:03.532 "adrfam": "ipv4", 00:21:03.532 "trsvcid": "4420", 00:21:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:03.532 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:03.532 "hdgst": false, 00:21:03.532 "ddgst": false 00:21:03.532 }, 00:21:03.532 "method": "bdev_nvme_attach_controller" 00:21:03.532 },{ 00:21:03.532 "params": { 00:21:03.532 "name": "Nvme5", 00:21:03.532 "trtype": "tcp", 00:21:03.532 "traddr": "10.0.0.2", 00:21:03.532 "adrfam": "ipv4", 00:21:03.532 "trsvcid": "4420", 00:21:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:03.532 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:03.532 "hdgst": false, 00:21:03.532 "ddgst": false 00:21:03.532 }, 00:21:03.532 "method": "bdev_nvme_attach_controller" 00:21:03.532 },{ 00:21:03.532 "params": { 00:21:03.532 "name": "Nvme6", 00:21:03.532 "trtype": "tcp", 00:21:03.532 "traddr": "10.0.0.2", 00:21:03.532 "adrfam": "ipv4", 00:21:03.532 "trsvcid": "4420", 00:21:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:03.532 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:03.532 "hdgst": false, 00:21:03.532 "ddgst": false 00:21:03.532 }, 00:21:03.532 "method": "bdev_nvme_attach_controller" 00:21:03.532 },{ 00:21:03.532 "params": { 00:21:03.532 "name": "Nvme7", 00:21:03.532 "trtype": "tcp", 00:21:03.532 "traddr": "10.0.0.2", 00:21:03.532 "adrfam": "ipv4", 00:21:03.532 "trsvcid": "4420", 00:21:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:03.532 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:03.532 "hdgst": false, 00:21:03.532 "ddgst": false 00:21:03.532 }, 00:21:03.532 "method": "bdev_nvme_attach_controller" 00:21:03.532 },{ 00:21:03.532 "params": { 00:21:03.532 "name": "Nvme8", 00:21:03.532 "trtype": "tcp", 00:21:03.532 "traddr": "10.0.0.2", 00:21:03.532 "adrfam": "ipv4", 00:21:03.532 "trsvcid": "4420", 00:21:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:03.532 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:03.532 "hdgst": false, 
00:21:03.532 "ddgst": false 00:21:03.532 }, 00:21:03.532 "method": "bdev_nvme_attach_controller" 00:21:03.532 },{ 00:21:03.532 "params": { 00:21:03.532 "name": "Nvme9", 00:21:03.532 "trtype": "tcp", 00:21:03.532 "traddr": "10.0.0.2", 00:21:03.532 "adrfam": "ipv4", 00:21:03.532 "trsvcid": "4420", 00:21:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:03.532 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:03.532 "hdgst": false, 00:21:03.532 "ddgst": false 00:21:03.532 }, 00:21:03.532 "method": "bdev_nvme_attach_controller" 00:21:03.532 },{ 00:21:03.532 "params": { 00:21:03.532 "name": "Nvme10", 00:21:03.532 "trtype": "tcp", 00:21:03.532 "traddr": "10.0.0.2", 00:21:03.532 "adrfam": "ipv4", 00:21:03.532 "trsvcid": "4420", 00:21:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:03.532 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:03.532 "hdgst": false, 00:21:03.532 "ddgst": false 00:21:03.532 }, 00:21:03.532 "method": "bdev_nvme_attach_controller" 00:21:03.532 }' 00:21:03.532 [2024-05-15 17:11:50.971627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.532 [2024-05-15 17:11:51.043835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.435 Running I/O for 10 seconds... 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:05.999 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=135 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 135 -ge 100 ']' 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3127673 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 3127673 ']' 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 3127673 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3127673 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3127673' 00:21:06.265 killing process with pid 3127673 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 3127673 00:21:06.265 [2024-05-15 17:11:53.872705] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:06.265 17:11:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 3127673 00:21:06.265 [2024-05-15 17:11:53.877489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877539] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877559] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877565] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877571] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877577] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877583] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877589] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877620] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.265 [2024-05-15 17:11:53.877627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877633] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877645] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877657] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877663] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877675] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877683] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877693] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877699] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877710] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877729] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877750] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877757] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877769] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877799] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the 
state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877834] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877853] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877871] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.877883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133570 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.878893] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.878924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.878931] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.878937] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.878944] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.878950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.878956] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.878963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.878969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.878975] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.878981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.878986] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.878992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.878998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879058] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 17:11:53.879093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 00:21:06.266 [2024-05-15 
17:11:53.879100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135f10 is same with the state(5) to be set 
00:21:06.267 [2024-05-15 17:11:53.880406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1133a10 is same with the state(5) to be set 
00:21:06.267 [2024-05-15 17:11:53.882907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1134350 is same with the state(5) to be set 
00:21:06.268 [2024-05-15 17:11:53.884046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11347f0 is same with the state(5) to be set 
00:21:06.269 [2024-05-15 17:11:53.885379] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1134c90 is same with the 
state(5) to be set 00:21:06.270 [2024-05-15 17:11:53.885775] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1134c90 is same with the state(5) to be set 00:21:06.270 [2024-05-15 17:11:53.886336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886419] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d81e10 is same with the state(5) to be set 00:21:06.270 [2024-05-15 17:11:53.886456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57db0 is same with the state(5) to be set 00:21:06.270 [2024-05-15 17:11:53.886537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9d8a0 is same with the state(5) to be set 00:21:06.270 [2024-05-15 17:11:53.886617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b730 is same with the state(5) to be set 00:21:06.270 [2024-05-15 17:11:53.886711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67960 is same with the state(5) to be set 00:21:06.270 [2024-05-15 17:11:53.886768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.270 [2024-05-15 17:11:53.886790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886793] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.270 [2024-05-15 17:11:53.886799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.270 [2024-05-15 17:11:53.886802] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.270 [2024-05-15 17:11:53.886807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.270 [2024-05-15 17:11:53.886810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.271 [2024-05-15 17:11:53.886817] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.271 [2024-05-15 17:11:53.886825] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.271 [2024-05-15 17:11:53.886832] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-05-15 17:11:53.886839] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with id:0 cdw10:00000000 cdw11:00000000 00:21:06.271 the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.271 [2024-05-15 17:11:53.886851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4df60 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886859] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886873] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886879] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.271 [2024-05-15 17:11:53.886885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.271 [2024-05-15 17:11:53.886892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.271 [2024-05-15 17:11:53.886900] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.271 [2024-05-15 17:11:53.886908] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-05-15 17:11:53.886916] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with id:0 cdw10:00000000 cdw11:00000000 00:21:06.271 the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-05-15 17:11:53.886925] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.271 the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886935] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with [2024-05-15 17:11:53.886935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsthe state(5) to be set 00:21:06.271 id:0 cdw10:00000000 cdw11:00000000 00:21:06.271 [2024-05-15 17:11:53.886944] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with [2024-05-15 
17:11:53.886945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:21:06.271 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.271 [2024-05-15 17:11:53.886953] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with [2024-05-15 17:11:53.886954] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc8b70 is same the state(5) to be set 00:21:06.271 with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886969] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.886994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887037] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887073] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887084] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887091] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887103] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.271 [2024-05-15 17:11:53.887121] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887133] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.271 [2024-05-15 17:11:53.887145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.271 [2024-05-15 17:11:53.887159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.271 [2024-05-15 17:11:53.887170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887177] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.271 [2024-05-15 17:11:53.887183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.271 [2024-05-15 17:11:53.887190] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.271 [2024-05-15 17:11:53.887198] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.271 [2024-05-15 17:11:53.887205] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.271 [2024-05-15 17:11:53.887218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.271 [2024-05-15 17:11:53.887225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.271 [2024-05-15 17:11:53.887230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887232] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.272 [2024-05-15 17:11:53.887238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135130 is same with the state(5) to be set 00:21:06.272 [2024-05-15 17:11:53.887247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887303] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.272 [2024-05-15 17:11:53.887809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.272 [2024-05-15 17:11:53.887818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.887826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.887832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.887840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.887846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.887854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.887861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.887870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.887876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.887886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.887892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.887900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.887907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.887915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.887921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.887929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.887935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.887943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.887949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.887957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.887966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.887974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.887981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.887988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.887995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.888005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.888012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.888020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.888020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.888035] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.888035] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.888045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.888064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.888064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.888081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.888088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.888095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.888102] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.273 [2024-05-15 17:11:53.888110] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.273 [2024-05-15 17:11:53.888117] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888127]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888134] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888140] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888153] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888177] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888183] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888185] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dffa00 was disconnected and freed. reset controller. 00:21:06.273 [2024-05-15 17:11:53.888188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888201] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888207] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888214] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888220] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888232] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888250] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888255] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the 
state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888263] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888269] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888276] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888328] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.273 [2024-05-15 17:11:53.888334] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888365] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888376] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888382] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888395] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888413] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888419] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888436] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888448] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11355d0 is same with the state(5) to be set 00:21:06.274 [2024-05-15 17:11:53.888564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.274 [2024-05-15 17:11:53.888586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.274 [2024-05-15 17:11:53.888601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.274 [2024-05-15 17:11:53.888608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.274 [2024-05-15 17:11:53.888617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.274 [2024-05-15 17:11:53.888624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.274 [2024-05-15 17:11:53.888632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.274 [2024-05-15 17:11:53.888638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.274 [2024-05-15 17:11:53.888646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.274 [2024-05-15 17:11:53.888652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.274 [2024-05-15 17:11:53.888660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.274 [2024-05-15 17:11:53.888666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.274 [2024-05-15 17:11:53.888674] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.274 [2024-05-15 17:11:53.888680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.274 [2024-05-15 17:11:53.888688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.274 [2024-05-15 17:11:53.888695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.888978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.888985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.889011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.275 [2024-05-15 17:11:53.889027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.889074] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.275 [2024-05-15 17:11:53.889124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.889179] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.275 [2024-05-15 17:11:53.889229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.275 [2024-05-15 17:11:53.889277] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.275 [2024-05-15 17:11:53.889327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.275 [2024-05-15 17:11:53.889373] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.889421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.276 [2024-05-15 17:11:53.889469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.889526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.276 [2024-05-15 17:11:53.889573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.889620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.276 [2024-05-15 17:11:53.889668] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.889719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.276 [2024-05-15 17:11:53.889765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.889817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.276 
[2024-05-15 17:11:53.889866] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.889916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.276 [2024-05-15 17:11:53.889963] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.890043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.276 [2024-05-15 17:11:53.890099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.890150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.276 [2024-05-15 17:11:53.890203] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.890253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.276 [2024-05-15 17:11:53.890300] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.890351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.276 [2024-05-15 17:11:53.890398] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.890452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.276 [2024-05-15 17:11:53.890501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.890551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.276 [2024-05-15 17:11:53.890602] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.890651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.276 [2024-05-15 17:11:53.890709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.890742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.276 [2024-05-15 17:11:53.890772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.890802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.276 [2024-05-15 17:11:53.890833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be 
set 00:21:06.276 [2024-05-15 17:11:53.890866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.276 [2024-05-15 17:11:53.890895] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.890926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.276 [2024-05-15 17:11:53.890958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.890990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.276 [2024-05-15 17:11:53.891020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.276 [2024-05-15 17:11:53.891086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.276 [2024-05-15 17:11:53.891147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.276 [2024-05-15 17:11:53.891216] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.276 [2024-05-15 17:11:53.891279] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.276 [2024-05-15 17:11:53.891340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.276 [2024-05-15 17:11:53.891403] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.276 [2024-05-15 17:11:53.891469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.276 [2024-05-15 17:11:53.891532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.276 [2024-05-15 17:11:53.891594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.276 [2024-05-15 17:11:53.891656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891686] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891749] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891811] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891908] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891939] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.891971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892005] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892068] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892200] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892231] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892262] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892332] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892395] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.276 [2024-05-15 17:11:53.892519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.277 [2024-05-15 17:11:53.892550] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.277 [2024-05-15 17:11:53.892582] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.277 [2024-05-15 17:11:53.892613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1135a70 is same with the state(5) to be set 00:21:06.277 [2024-05-15 17:11:53.903475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903560] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.903901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.277 [2024-05-15 17:11:53.903911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.904004] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d3d950 was disconnected and freed. reset controller. 
00:21:06.277 [2024-05-15 17:11:53.905676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:06.277 [2024-05-15 17:11:53.905719] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc8b70 (9): Bad file descriptor 00:21:06.277 [2024-05-15 17:11:53.905770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.277 [2024-05-15 17:11:53.905783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.905794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.277 [2024-05-15 17:11:53.905804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.905814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.277 [2024-05-15 17:11:53.905824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.905834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.277 [2024-05-15 17:11:53.905845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.905854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d0a0 is same with the state(5) to be set 00:21:06.277 [2024-05-15 17:11:53.905876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d81e10 (9): Bad file descriptor 00:21:06.277 [2024-05-15 17:11:53.905898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e57db0 (9): Bad file descriptor 00:21:06.277 [2024-05-15 17:11:53.905917] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9d8a0 (9): Bad file descriptor 00:21:06.277 [2024-05-15 17:11:53.905932] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9b730 (9): Bad file descriptor 00:21:06.277 [2024-05-15 17:11:53.905963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.277 [2024-05-15 17:11:53.905984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.905993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.277 [2024-05-15 17:11:53.906004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.906016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.277 [2024-05-15 17:11:53.906026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:06.277 [2024-05-15 17:11:53.906035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.277 [2024-05-15 17:11:53.906045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.906054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d790b0 is same with the state(5) to be set 00:21:06.277 [2024-05-15 17:11:53.906085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.277 [2024-05-15 17:11:53.906097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.277 [2024-05-15 17:11:53.906107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.277 [2024-05-15 17:11:53.906116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.906126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.278 [2024-05-15 17:11:53.906136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.906146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.278 [2024-05-15 17:11:53.906156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.906173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a2610 is same with the state(5) to be set 00:21:06.278 [2024-05-15 17:11:53.906193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67960 (9): Bad file descriptor 00:21:06.278 [2024-05-15 17:11:53.906213] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4df60 (9): Bad file descriptor 00:21:06.278 [2024-05-15 17:11:53.907949] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:06.278 [2024-05-15 17:11:53.908405] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:06.278 [2024-05-15 17:11:53.908466] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:06.278 [2024-05-15 17:11:53.908870] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:06.278 [2024-05-15 17:11:53.909047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.278 [2024-05-15 17:11:53.909191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.278 [2024-05-15 17:11:53.909208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc8b70 with addr=10.0.0.2, port=4420 00:21:06.278 [2024-05-15 17:11:53.909220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc8b70 is same with the state(5) to be set 00:21:06.278 [2024-05-15 17:11:53.909399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:21:06.278 [2024-05-15 17:11:53.909602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.278 [2024-05-15 17:11:53.909617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c9d8a0 with addr=10.0.0.2, port=4420 00:21:06.278 [2024-05-15 17:11:53.909627] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9d8a0 is same with the state(5) to be set 00:21:06.278 [2024-05-15 17:11:53.909975] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:06.278 [2024-05-15 17:11:53.910083] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:06.278 [2024-05-15 17:11:53.910145] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:06.278 [2024-05-15 17:11:53.910284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.278 [2024-05-15 17:11:53.910976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.278 [2024-05-15 17:11:53.910986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.910997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:06.279 [2024-05-15 17:11:53.911159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 
[2024-05-15 17:11:53.911395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 
17:11:53.911618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.279 [2024-05-15 17:11:53.911718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.279 [2024-05-15 17:11:53.911730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.280 [2024-05-15 17:11:53.911740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.280 [2024-05-15 17:11:53.911751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d369a0 is same with the state(5) to be set 00:21:06.280 [2024-05-15 17:11:53.915773] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d369a0 was disconnected and freed. reset controller. 00:21:06.280 [2024-05-15 17:11:53.915812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc8b70 (9): Bad file descriptor 00:21:06.280 [2024-05-15 17:11:53.915829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9d8a0 (9): Bad file descriptor 00:21:06.280 [2024-05-15 17:11:53.915858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4d0a0 (9): Bad file descriptor 00:21:06.280 [2024-05-15 17:11:53.915884] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:06.280 [2024-05-15 17:11:53.915913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d790b0 (9): Bad file descriptor 00:21:06.280 [2024-05-15 17:11:53.915937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a2610 (9): Bad file descriptor 00:21:06.280 [2024-05-15 17:11:53.916034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.546 [2024-05-15 17:11:53.916708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.546 [2024-05-15 17:11:53.916720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.916730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.916743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.916755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.916768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.916778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.916790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.916800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.916813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.916822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.916835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.916847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.916858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.916870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.916883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.916894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.916906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.916917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.916929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.916939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.916951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.916962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.916974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.916984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.916997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.547 [2024-05-15 17:11:53.917505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.547 [2024-05-15 17:11:53.917516] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c96150 is same with the state(5) to be set 00:21:06.547 [2024-05-15 17:11:53.917597] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c96150 was disconnected and freed. reset controller. 00:21:06.547 [2024-05-15 17:11:53.917609] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:06.547 [2024-05-15 17:11:53.917685] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:06.547 [2024-05-15 17:11:53.917730] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:06.547 [2024-05-15 17:11:53.919067] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:06.547 [2024-05-15 17:11:53.919098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:06.547 [2024-05-15 17:11:53.919109] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:06.547 [2024-05-15 17:11:53.919121] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:06.547 [2024-05-15 17:11:53.919137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:06.547 [2024-05-15 17:11:53.919150] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:06.547 [2024-05-15 17:11:53.919160] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:06.547 [2024-05-15 17:11:53.919193] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:06.547 [2024-05-15 17:11:53.919209] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:06.548 [2024-05-15 17:11:53.919272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.919978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.919989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.920000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.920010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.920023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.920034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.920046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.920056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.920068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.920078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.920090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.920101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.920115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.920125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.920138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.920148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.920160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.920181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.920194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.548 [2024-05-15 17:11:53.920204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.548 [2024-05-15 17:11:53.920216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.920748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.920760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d3ec80 is same with the state(5) to be set 00:21:06.549 [2024-05-15 17:11:53.922385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922509] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.549 [2024-05-15 17:11:53.922743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.549 [2024-05-15 17:11:53.922756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.922767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.922779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.922790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.922802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.922813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.922825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.922836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.922848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.922858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.922871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.922880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.922893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.922906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.922919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.922929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.922942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.922953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.922965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.922975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.922987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.922997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:06.550 [2024-05-15 17:11:53.923452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.550 [2024-05-15 17:11:53.923491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.550 [2024-05-15 17:11:53.923503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 
17:11:53.923685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.923872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.923883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e00f90 is same with the state(5) to be set 00:21:06.551 [2024-05-15 17:11:53.925280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925299] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.551 [2024-05-15 17:11:53.925806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.551 [2024-05-15 17:11:53.925819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.925830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.925843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.925854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.925865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.925876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.925889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.925899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.925912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.925922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.925935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.925945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.925957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.925967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.925979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.925989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.552 [2024-05-15 17:11:53.926681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.552 [2024-05-15 17:11:53.926691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.926703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.926713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.926725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.926735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.926748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.926758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.926770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c94cf0 is same with the state(5) to be set 00:21:06.553 [2024-05-15 17:11:53.929490] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.553 [2024-05-15 17:11:53.929515] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.553 [2024-05-15 17:11:53.929525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:06.553 [2024-05-15 17:11:53.929538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:06.553 [2024-05-15 17:11:53.929889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.553 [2024-05-15 17:11:53.930074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.553 [2024-05-15 17:11:53.930089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d81e10 with addr=10.0.0.2, port=4420 00:21:06.553 [2024-05-15 17:11:53.930101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d81e10 is same with the state(5) to be set 00:21:06.553 [2024-05-15 17:11:53.930158] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:06.553 [2024-05-15 17:11:53.930183] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:06.553 [2024-05-15 17:11:53.930214] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d81e10 (9): Bad file descriptor 00:21:06.553 [2024-05-15 17:11:53.930604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:06.553 [2024-05-15 17:11:53.930623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:06.553 [2024-05-15 17:11:53.930798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.553 [2024-05-15 17:11:53.931028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.553 [2024-05-15 17:11:53.931043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c9b730 with addr=10.0.0.2, port=4420 00:21:06.553 [2024-05-15 17:11:53.931054] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9b730 is same with the state(5) to be set 00:21:06.553 [2024-05-15 17:11:53.931183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.553 [2024-05-15 17:11:53.931387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.553 [2024-05-15 17:11:53.931402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e57db0 with addr=10.0.0.2, port=4420 00:21:06.553 [2024-05-15 17:11:53.931412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e57db0 is same with the state(5) to be set 00:21:06.553 [2024-05-15 17:11:53.932443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 
17:11:53.932548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932716] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932886] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.553 [2024-05-15 17:11:53.932911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.553 [2024-05-15 17:11:53.932918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.932928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.932936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.932945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.932952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.932961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.932968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.932977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.932984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.932993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.554 [2024-05-15 17:11:53.933512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.554 [2024-05-15 17:11:53.933519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c97650 is same with the state(5) to be set 00:21:06.555 [2024-05-15 17:11:53.934558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934583] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.934990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.934998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.935005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.935014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.935021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.935030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.935036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.935045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.935052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.555 [2024-05-15 17:11:53.935063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.555 [2024-05-15 17:11:53.935070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:06.556 [2024-05-15 17:11:53.935258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 
17:11:53.935421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935583] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.935616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.935624] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c98b50 is same with the state(5) to be set 00:21:06.556 [2024-05-15 17:11:53.936635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.936649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.936661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.936668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.936678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.936686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.936695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.936702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.936711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.936718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.936728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.936735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.556 [2024-05-15 17:11:53.936747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.556 [2024-05-15 17:11:53.936754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936925] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.936992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.936999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.557 [2024-05-15 17:11:53.937369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.557 [2024-05-15 17:11:53.937378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.558 [2024-05-15 17:11:53.937701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.558 [2024-05-15 17:11:53.937709] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d35490 is same with the state(5) to be set 00:21:06.558 [2024-05-15 17:11:53.938984] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:06.558 [2024-05-15 17:11:53.939005] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:06.558 [2024-05-15 17:11:53.939015] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:06.558 [2024-05-15 17:11:53.939024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:06.558 task offset: 33152 on job bdev=Nvme3n1 fails 00:21:06.558 00:21:06.558 Latency(us) 00:21:06.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.558 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.558 Job: Nvme1n1 ended in about 0.90 seconds with error 00:21:06.558 Verification LBA range: start 0x0 length 0x400 
00:21:06.558 Nvme1n1 : 0.90 217.47 13.59 71.01 0.00 219674.04 16526.47 215186.03 00:21:06.558 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.558 Job: Nvme2n1 ended in about 0.92 seconds with error 00:21:06.558 Verification LBA range: start 0x0 length 0x400 00:21:06.558 Nvme2n1 : 0.92 214.02 13.38 69.88 0.00 219330.58 16754.42 214274.23 00:21:06.558 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.558 Job: Nvme3n1 ended in about 0.90 seconds with error 00:21:06.558 Verification LBA range: start 0x0 length 0x400 00:21:06.558 Nvme3n1 : 0.90 284.73 17.80 71.18 0.00 171669.41 13392.14 214274.23 00:21:06.558 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.558 Job: Nvme4n1 ended in about 0.92 seconds with error 00:21:06.558 Verification LBA range: start 0x0 length 0x400 00:21:06.558 Nvme4n1 : 0.92 208.94 13.06 69.65 0.00 215571.59 19261.89 209715.20 00:21:06.558 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.558 Job: Nvme5n1 ended in about 0.92 seconds with error 00:21:06.558 Verification LBA range: start 0x0 length 0x400 00:21:06.558 Nvme5n1 : 0.92 208.29 13.02 69.43 0.00 212382.27 15614.66 219745.06 00:21:06.558 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.558 Job: Nvme6n1 ended in about 0.92 seconds with error 00:21:06.558 Verification LBA range: start 0x0 length 0x400 00:21:06.558 Nvme6n1 : 0.92 207.99 13.00 69.33 0.00 208797.16 15842.62 217009.64 00:21:06.558 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.558 Job: Nvme7n1 ended in about 0.93 seconds with error 00:21:06.558 Verification LBA range: start 0x0 length 0x400 00:21:06.558 Nvme7n1 : 0.93 206.80 12.93 68.93 0.00 206166.82 18464.06 215186.03 00:21:06.558 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.558 Job: Nvme8n1 ended in about 0.93 seconds with error 00:21:06.558 Verification LBA range: start 0x0 length 0x400 00:21:06.558 Nvme8n1 : 0.93 206.33 12.90 68.78 0.00 202725.06 15386.71 217921.45 00:21:06.558 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.558 Job: Nvme9n1 ended in about 0.93 seconds with error 00:21:06.558 Verification LBA range: start 0x0 length 0x400 00:21:06.558 Nvme9n1 : 0.93 137.25 8.58 68.62 0.00 265787.36 16754.42 246187.41 00:21:06.558 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:06.558 Job: Nvme10n1 ended in about 0.91 seconds with error 00:21:06.558 Verification LBA range: start 0x0 length 0x400 00:21:06.558 Nvme10n1 : 0.91 144.61 9.04 70.12 0.00 248847.23 21655.37 238892.97 00:21:06.558 =================================================================================================================== 00:21:06.558 Total : 2036.44 127.28 696.94 0.00 213941.48 13392.14 246187.41 00:21:06.558 [2024-05-15 17:11:53.960520] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:06.558 [2024-05-15 17:11:53.960555] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:06.558 [2024-05-15 17:11:53.960774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.558 [2024-05-15 17:11:53.961042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.558 [2024-05-15 17:11:53.961055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e67960 with 
addr=10.0.0.2, port=4420 00:21:06.558 [2024-05-15 17:11:53.961066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e67960 is same with the state(5) to be set 00:21:06.558 [2024-05-15 17:11:53.961335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.558 [2024-05-15 17:11:53.961519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.558 [2024-05-15 17:11:53.961531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4df60 with addr=10.0.0.2, port=4420 00:21:06.559 [2024-05-15 17:11:53.961538] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4df60 is same with the state(5) to be set 00:21:06.559 [2024-05-15 17:11:53.961552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9b730 (9): Bad file descriptor 00:21:06.559 [2024-05-15 17:11:53.961564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e57db0 (9): Bad file descriptor 00:21:06.559 [2024-05-15 17:11:53.961578] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:06.559 [2024-05-15 17:11:53.961585] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:06.559 [2024-05-15 17:11:53.961593] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:06.559 [2024-05-15 17:11:53.961725] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.559 [2024-05-15 17:11:53.961982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.559 [2024-05-15 17:11:53.962220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.559 [2024-05-15 17:11:53.962232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c9d8a0 with addr=10.0.0.2, port=4420 00:21:06.559 [2024-05-15 17:11:53.962240] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c9d8a0 is same with the state(5) to be set 00:21:06.559 [2024-05-15 17:11:53.962369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.559 [2024-05-15 17:11:53.962487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.559 [2024-05-15 17:11:53.962498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cc8b70 with addr=10.0.0.2, port=4420 00:21:06.559 [2024-05-15 17:11:53.962506] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc8b70 is same with the state(5) to be set 00:21:06.559 [2024-05-15 17:11:53.962751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.559 [2024-05-15 17:11:53.962868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.559 [2024-05-15 17:11:53.962880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a2610 with addr=10.0.0.2, port=4420 00:21:06.559 [2024-05-15 17:11:53.962888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17a2610 is same with the state(5) to be set 00:21:06.559 [2024-05-15 17:11:53.963055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.559 [2024-05-15 17:11:53.963228] posix.c:1037:posix_sock_create: *ERROR*: 
connect() failed, errno = 111 00:21:06.559 [2024-05-15 17:11:53.963240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d790b0 with addr=10.0.0.2, port=4420 00:21:06.559 [2024-05-15 17:11:53.963247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d790b0 is same with the state(5) to be set 00:21:06.559 [2024-05-15 17:11:53.963479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.559 [2024-05-15 17:11:53.963653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.559 [2024-05-15 17:11:53.963665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4d0a0 with addr=10.0.0.2, port=4420 00:21:06.559 [2024-05-15 17:11:53.963672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4d0a0 is same with the state(5) to be set 00:21:06.559 [2024-05-15 17:11:53.963682] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e67960 (9): Bad file descriptor 00:21:06.559 [2024-05-15 17:11:53.963691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4df60 (9): Bad file descriptor 00:21:06.559 [2024-05-15 17:11:53.963699] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:06.559 [2024-05-15 17:11:53.963706] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:06.559 [2024-05-15 17:11:53.963714] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:06.559 [2024-05-15 17:11:53.963725] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:06.559 [2024-05-15 17:11:53.963732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:06.559 [2024-05-15 17:11:53.963742] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:06.559 [2024-05-15 17:11:53.963771] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:06.559 [2024-05-15 17:11:53.963782] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:06.559 [2024-05-15 17:11:53.963791] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:06.559 [2024-05-15 17:11:53.963802] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:06.559 [2024-05-15 17:11:53.964533] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.559 [2024-05-15 17:11:53.964547] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:06.559 [2024-05-15 17:11:53.964559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c9d8a0 (9): Bad file descriptor 00:21:06.559 [2024-05-15 17:11:53.964570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc8b70 (9): Bad file descriptor 00:21:06.559 [2024-05-15 17:11:53.964578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a2610 (9): Bad file descriptor 00:21:06.559 [2024-05-15 17:11:53.964587] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d790b0 (9): Bad file descriptor 00:21:06.559 [2024-05-15 17:11:53.964597] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4d0a0 (9): Bad file descriptor 00:21:06.559 [2024-05-15 17:11:53.964605] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:06.559 [2024-05-15 17:11:53.964611] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:06.559 [2024-05-15 17:11:53.964617] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:06.559 [2024-05-15 17:11:53.964627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:06.559 [2024-05-15 17:11:53.964634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:06.559 [2024-05-15 17:11:53.964640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:06.559 [2024-05-15 17:11:53.964918] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:06.559 [2024-05-15 17:11:53.964933] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.559 [2024-05-15 17:11:53.964939] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.559 [2024-05-15 17:11:53.964954] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:06.559 [2024-05-15 17:11:53.964960] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:06.559 [2024-05-15 17:11:53.964967] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:06.559 [2024-05-15 17:11:53.964977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:06.559 [2024-05-15 17:11:53.964983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:06.559 [2024-05-15 17:11:53.964990] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:06.559 [2024-05-15 17:11:53.964998] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:06.559 [2024-05-15 17:11:53.965005] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:06.559 [2024-05-15 17:11:53.965012] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:21:06.559 [2024-05-15 17:11:53.965024] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:06.559 [2024-05-15 17:11:53.965031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:06.559 [2024-05-15 17:11:53.965038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:06.559 [2024-05-15 17:11:53.965047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:06.559 [2024-05-15 17:11:53.965053] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:06.559 [2024-05-15 17:11:53.965060] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:06.559 [2024-05-15 17:11:53.965102] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.559 [2024-05-15 17:11:53.965111] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.559 [2024-05-15 17:11:53.965116] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.559 [2024-05-15 17:11:53.965122] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.559 [2024-05-15 17:11:53.965129] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:06.559 [2024-05-15 17:11:53.965387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.559 [2024-05-15 17:11:53.965521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:06.559 [2024-05-15 17:11:53.965532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d81e10 with addr=10.0.0.2, port=4420 00:21:06.559 [2024-05-15 17:11:53.965540] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d81e10 is same with the state(5) to be set 00:21:06.559 [2024-05-15 17:11:53.965569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d81e10 (9): Bad file descriptor 00:21:06.559 [2024-05-15 17:11:53.965596] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:06.560 [2024-05-15 17:11:53.965603] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:06.560 [2024-05-15 17:11:53.965610] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:06.560 [2024-05-15 17:11:53.965634] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
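The repeated "connect() failed, errno = 111" entries above are the immediate cause of the controller reinitialization failures in this shutdown test: once the target side stops listening on 10.0.0.2:4420, every reconnect attempt is refused. A minimal sketch for decoding such errno values on a Linux test host, assuming python3 is available (the decode_errno helper is hypothetical and not part of the SPDK test scripts):

    # Hypothetical helper, not part of the SPDK scripts: translate the numeric errno
    # values printed by posix_sock_create() into their symbolic meaning on Linux.
    decode_errno() {
        python3 -c 'import os, sys; print(sys.argv[1], os.strerror(int(sys.argv[1])))' "$1"
    }

    decode_errno 111   # -> 111 Connection refused (ECONNREFUSED)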
00:21:06.818 17:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:06.818 17:11:54 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3127958 00:21:07.788 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3127958) - No such process 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:07.788 rmmod nvme_tcp 00:21:07.788 rmmod nvme_fabrics 00:21:07.788 rmmod nvme_keyring 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.788 17:11:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.323 17:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:10.323 00:21:10.323 real 0m8.333s 00:21:10.323 user 0m21.932s 00:21:10.323 sys 0m1.288s 00:21:10.323 
17:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:10.323 17:11:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:10.323 ************************************ 00:21:10.323 END TEST nvmf_shutdown_tc3 00:21:10.323 ************************************ 00:21:10.323 17:11:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:10.323 00:21:10.323 real 0m31.371s 00:21:10.323 user 1m19.498s 00:21:10.323 sys 0m8.212s 00:21:10.323 17:11:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:10.323 17:11:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:10.323 ************************************ 00:21:10.323 END TEST nvmf_shutdown 00:21:10.323 ************************************ 00:21:10.323 17:11:57 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:21:10.323 17:11:57 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:10.323 17:11:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:10.323 17:11:57 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:21:10.323 17:11:57 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:10.323 17:11:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:10.323 17:11:57 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:21:10.323 17:11:57 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:10.323 17:11:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:10.323 17:11:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:10.323 17:11:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:10.323 ************************************ 00:21:10.323 START TEST nvmf_multicontroller 00:21:10.323 ************************************ 00:21:10.323 17:11:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:10.323 * Looking for test storage... 
00:21:10.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.323 17:11:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.323 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:10.323 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.323 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.323 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.323 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:10.324 17:11:57 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:10.324 17:11:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.590 17:12:02 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:15.590 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:15.590 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.590 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:15.591 Found net devices under 0000:86:00.0: cvl_0_0 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:15.591 Found net devices under 0000:86:00.1: cvl_0_1 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:15.591 17:12:02 
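The nvmf_tcp_init sequence being traced here splits the two ice ports between the root namespace and a private one: cvl_0_0 becomes the target interface (10.0.0.2) inside the cvl_0_0_ns_spdk namespace, while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the commands in this trace (interface and namespace names are specific to this test bed), the bring-up is:

  ip netns add cvl_0_0_ns_spdk                          # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                    # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator reachability

The single-packet pings below are the test's go/no-go gate before any NVMe-oF traffic is attempted.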
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:15.591 17:12:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:15.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:21:15.591 00:21:15.591 --- 10.0.0.2 ping statistics --- 00:21:15.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.591 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:15.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:15.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:21:15.591 00:21:15.591 --- 10.0.0.1 ping statistics --- 00:21:15.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.591 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3132351 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3132351 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3132351 ']' 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:15.591 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:15.591 [2024-05-15 17:12:03.130185] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:21:15.591 [2024-05-15 17:12:03.130228] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.591 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.591 [2024-05-15 17:12:03.186415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:15.850 [2024-05-15 17:12:03.266432] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.850 [2024-05-15 17:12:03.266463] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.850 [2024-05-15 17:12:03.266470] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.850 [2024-05-15 17:12:03.266476] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.850 [2024-05-15 17:12:03.266481] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.850 [2024-05-15 17:12:03.266589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.850 [2024-05-15 17:12:03.266695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.850 [2024-05-15 17:12:03.266701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.415 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:16.415 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:21:16.415 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.415 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.415 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.415 17:12:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.415 17:12:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.415 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.415 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.415 [2024-05-15 17:12:03.990511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.415 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.415 17:12:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:16.415 17:12:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.415 17:12:03 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.415 Malloc0 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.415 [2024-05-15 17:12:04.055603] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:16.415 [2024-05-15 17:12:04.055828] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.415 [2024-05-15 17:12:04.063740] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.415 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.674 Malloc1 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.674 17:12:04 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3132596 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3132596 /var/tmp/bdevperf.sock 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3132596 ']' 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
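Everything configured above is ordinary SPDK JSON-RPC: rpc_cmd drives the nvmf_tgt that was started inside the namespace, and bdevperf is launched with -z so it sits idle on its own RPC socket (/var/tmp/bdevperf.sock) until it is configured and kicked. A hedged sketch of issuing the same sequence by hand with scripts/rpc.py (flags and sizes copied from the trace; the direct rpc.py invocation is an assumption about what rpc_cmd expands to):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # the second subsystem (cnode2 backed by Malloc1) is built the same way, then:
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
  # bdevperf now waits; the attach_controller calls below are sent to /var/tmp/bdevperf.sock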
00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:16.674 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.608 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:17.608 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:21:17.608 17:12:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:17.608 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.608 17:12:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.608 NVMe0n1 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.608 1 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.608 request: 00:21:17.608 { 00:21:17.608 "name": "NVMe0", 00:21:17.608 "trtype": "tcp", 00:21:17.608 "traddr": "10.0.0.2", 00:21:17.608 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:17.608 "hostaddr": "10.0.0.2", 00:21:17.608 "hostsvcid": "60000", 00:21:17.608 "adrfam": "ipv4", 00:21:17.608 "trsvcid": "4420", 00:21:17.608 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.608 "method": 
"bdev_nvme_attach_controller", 00:21:17.608 "req_id": 1 00:21:17.608 } 00:21:17.608 Got JSON-RPC error response 00:21:17.608 response: 00:21:17.608 { 00:21:17.608 "code": -114, 00:21:17.608 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:17.608 } 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:17.608 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.609 request: 00:21:17.609 { 00:21:17.609 "name": "NVMe0", 00:21:17.609 "trtype": "tcp", 00:21:17.609 "traddr": "10.0.0.2", 00:21:17.609 "hostaddr": "10.0.0.2", 00:21:17.609 "hostsvcid": "60000", 00:21:17.609 "adrfam": "ipv4", 00:21:17.609 "trsvcid": "4420", 00:21:17.609 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:17.609 "method": "bdev_nvme_attach_controller", 00:21:17.609 "req_id": 1 00:21:17.609 } 00:21:17.609 Got JSON-RPC error response 00:21:17.609 response: 00:21:17.609 { 00:21:17.609 "code": -114, 00:21:17.609 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:17.609 } 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.609 request: 00:21:17.609 { 00:21:17.609 "name": "NVMe0", 00:21:17.609 "trtype": "tcp", 00:21:17.609 "traddr": "10.0.0.2", 00:21:17.609 "hostaddr": "10.0.0.2", 00:21:17.609 "hostsvcid": "60000", 00:21:17.609 "adrfam": "ipv4", 00:21:17.609 "trsvcid": "4420", 00:21:17.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.609 "multipath": "disable", 00:21:17.609 "method": "bdev_nvme_attach_controller", 00:21:17.609 "req_id": 1 00:21:17.609 } 00:21:17.609 Got JSON-RPC error response 00:21:17.609 response: 00:21:17.609 { 00:21:17.609 "code": -114, 00:21:17.609 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:17.609 } 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.609 request: 00:21:17.609 { 00:21:17.609 "name": "NVMe0", 00:21:17.609 "trtype": "tcp", 00:21:17.609 "traddr": "10.0.0.2", 00:21:17.609 "hostaddr": "10.0.0.2", 00:21:17.609 "hostsvcid": "60000", 00:21:17.609 "adrfam": "ipv4", 00:21:17.609 "trsvcid": "4420", 00:21:17.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.609 "multipath": "failover", 00:21:17.609 "method": "bdev_nvme_attach_controller", 00:21:17.609 "req_id": 1 00:21:17.609 } 00:21:17.609 Got JSON-RPC error response 00:21:17.609 response: 00:21:17.609 { 00:21:17.609 "code": -114, 00:21:17.609 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:17.609 } 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.609 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.867 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.867 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:17.867 17:12:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:19.240 0 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3132596 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3132596 ']' 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3132596 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3132596 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3132596' 00:21:19.240 killing process with pid 3132596 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3132596 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3132596 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:19.240 17:12:06 
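Taken together, the attach_controller checks above pin down the bdevperf-side rules: reusing the controller name NVMe0 on the same traddr/trsvcid is rejected with -114 whether the hostnqn, the target subsystem, or -x failover is changed, and -x disable is refused with its own message because multipath is off for that controller; only a genuinely different path (the 4421 listener) attaches cleanly, after which two controllers are visible. A compressed restatement of that flow against the bdevperf RPC socket, mirroring the traced commands:

  rpc_b() { scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"; }   # assumed stand-in for rpc_cmd -s
  rpc_b bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000          # first path: succeeds
  ! rpc_b bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
        -q nqn.2021-09-7.io.spdk:00001                              # different hostnqn: -114
  ! rpc_b bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000          # different subsystem: -114
  ! rpc_b bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable   # multipath disabled
  ! rpc_b bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover  # still the same path: -114
  rpc_b bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1                                # new listener: succeeds
  rpc_b bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1
  rpc_b bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  test "$(rpc_b bdev_nvme_get_controllers | grep -c NVMe)" -eq 2     # NVMe0 + NVMe1 present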
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:21:19.240 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:19.240 [2024-05-15 17:12:04.164232] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:21:19.240 [2024-05-15 17:12:04.164280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132596 ] 00:21:19.240 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.240 [2024-05-15 17:12:04.218722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.240 [2024-05-15 17:12:04.291343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.240 [2024-05-15 17:12:05.452379] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name a81a90c7-1126-40bc-8ddc-40b1811fbd2b already exists 00:21:19.240 [2024-05-15 17:12:05.452410] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:a81a90c7-1126-40bc-8ddc-40b1811fbd2b alias for bdev NVMe1n1 00:21:19.240 [2024-05-15 17:12:05.452420] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:19.240 Running I/O for 1 seconds... 
00:21:19.240 00:21:19.240 Latency(us) 00:21:19.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.240 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:19.240 NVMe0n1 : 1.01 24378.55 95.23 0.00 0.00 5238.40 1602.78 9118.05 00:21:19.240 =================================================================================================================== 00:21:19.240 Total : 24378.55 95.23 0.00 0.00 5238.40 1602.78 9118.05 00:21:19.240 Received shutdown signal, test time was about 1.000000 seconds 00:21:19.240 00:21:19.240 Latency(us) 00:21:19.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.240 =================================================================================================================== 00:21:19.240 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:19.240 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:19.240 17:12:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.497 17:12:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.497 rmmod nvme_tcp 00:21:19.497 rmmod nvme_fabrics 00:21:19.497 rmmod nvme_keyring 00:21:19.497 17:12:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.497 17:12:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:19.497 17:12:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:19.497 17:12:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3132351 ']' 00:21:19.497 17:12:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3132351 00:21:19.497 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3132351 ']' 00:21:19.497 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3132351 00:21:19.497 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:21:19.497 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:19.497 17:12:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3132351 00:21:19.497 17:12:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:19.497 17:12:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:19.497 17:12:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3132351' 00:21:19.497 killing process with pid 3132351 00:21:19.497 17:12:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3132351 00:21:19.497 [2024-05-15 
17:12:07.003900] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:19.497 17:12:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3132351 00:21:19.756 17:12:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.756 17:12:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.756 17:12:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:19.756 17:12:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.756 17:12:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.756 17:12:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.756 17:12:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.756 17:12:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.659 17:12:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.659 00:21:21.659 real 0m11.688s 00:21:21.659 user 0m16.407s 00:21:21.659 sys 0m4.756s 00:21:21.659 17:12:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:21.917 17:12:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:21.917 ************************************ 00:21:21.917 END TEST nvmf_multicontroller 00:21:21.917 ************************************ 00:21:21.917 17:12:09 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:21.917 17:12:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:21.917 17:12:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:21.917 17:12:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.917 ************************************ 00:21:21.917 START TEST nvmf_aer 00:21:21.917 ************************************ 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:21.917 * Looking for test storage... 
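The real/user/sys block, the END TEST nvmf_multicontroller banner and the START TEST nvmf_aer banner that follow all come from the harness's run_test wrapper, which times each host-side script and brackets its output. A minimal sketch of that wrapper's observable behaviour (an assumption about its shape; the actual helper lives in autotest_common.sh and is not quoted here):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"              # e.g. test/nvmf/host/aer.sh --transport=tcp
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }

Invoked here as run_test nvmf_aer .../spdk/test/nvmf/host/aer.sh --transport=tcp, so the aer suite that starts below runs under the same timing and pass/fail accounting as the multicontroller suite that just finished.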
00:21:21.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.917 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.918 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.918 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.918 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.918 17:12:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.918 17:12:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.918 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.918 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.918 17:12:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.918 17:12:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:27.190 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:21:27.190 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:27.190 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:27.191 Found net devices under 0000:86:00.0: cvl_0_0 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:27.191 Found net devices under 0000:86:00.1: cvl_0_1 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.191 
17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:27.191 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:27.450 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:27.450 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:21:27.450 00:21:27.450 --- 10.0.0.2 ping statistics --- 00:21:27.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.450 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:27.450 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:27.450 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:21:27.450 00:21:27.450 --- 10.0.0.1 ping statistics --- 00:21:27.450 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:27.450 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3136764 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3136764 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3136764 ']' 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:27.450 17:12:14 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:27.450 [2024-05-15 17:12:14.998085] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:21:27.450 [2024-05-15 17:12:14.998128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.450 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.450 [2024-05-15 17:12:15.056885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:27.708 [2024-05-15 17:12:15.137374] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.708 [2024-05-15 17:12:15.137409] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
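For readers following the trace, the test-network bring-up performed above reduces to the short sketch below. The interface names (cvl_0_0 for the target port, cvl_0_1 for the initiator), the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are taken from this particular run and will differ on other hosts; this is a condensed replay of what nvmf_tcp_init does, not the helper itself.

# Condensed replay of the nvmf_tcp_init steps traced above (names/addresses copied from this run).
ip -4 addr flush cvl_0_0                                   # target-side port
ip -4 addr flush cvl_0_1                                   # initiator-side port
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port lives in its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                          # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator sanity check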
00:21:27.708 [2024-05-15 17:12:15.137417] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.708 [2024-05-15 17:12:15.137423] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.708 [2024-05-15 17:12:15.137428] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:27.708 [2024-05-15 17:12:15.137469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.708 [2024-05-15 17:12:15.137489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:27.708 [2024-05-15 17:12:15.137579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:27.708 [2024-05-15 17:12:15.137581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:28.274 [2024-05-15 17:12:15.847167] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:28.274 Malloc0 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.274 17:12:15 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:28.275 [2024-05-15 17:12:15.898807] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:28.275 [2024-05-15 17:12:15.899049] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:28.275 [ 00:21:28.275 { 00:21:28.275 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:28.275 "subtype": "Discovery", 00:21:28.275 "listen_addresses": [], 00:21:28.275 "allow_any_host": true, 00:21:28.275 "hosts": [] 00:21:28.275 }, 00:21:28.275 { 00:21:28.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.275 "subtype": "NVMe", 00:21:28.275 "listen_addresses": [ 00:21:28.275 { 00:21:28.275 "trtype": "TCP", 00:21:28.275 "adrfam": "IPv4", 00:21:28.275 "traddr": "10.0.0.2", 00:21:28.275 "trsvcid": "4420" 00:21:28.275 } 00:21:28.275 ], 00:21:28.275 "allow_any_host": true, 00:21:28.275 "hosts": [], 00:21:28.275 "serial_number": "SPDK00000000000001", 00:21:28.275 "model_number": "SPDK bdev Controller", 00:21:28.275 "max_namespaces": 2, 00:21:28.275 "min_cntlid": 1, 00:21:28.275 "max_cntlid": 65519, 00:21:28.275 "namespaces": [ 00:21:28.275 { 00:21:28.275 "nsid": 1, 00:21:28.275 "bdev_name": "Malloc0", 00:21:28.275 "name": "Malloc0", 00:21:28.275 "nguid": "028D6D43478F486D89D8B38D1983A097", 00:21:28.275 "uuid": "028d6d43-478f-486d-89d8-b38d1983a097" 00:21:28.275 } 00:21:28.275 ] 00:21:28.275 } 00:21:28.275 ] 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3137010 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:21:28.275 17:12:15 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:21:28.533 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:28.533 Malloc1 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.533 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:28.792 Asynchronous Event Request test 00:21:28.792 Attaching to 10.0.0.2 00:21:28.792 Attached to 10.0.0.2 00:21:28.792 Registering asynchronous event callbacks... 00:21:28.792 Starting namespace attribute notice tests for all controllers... 00:21:28.792 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:28.792 aer_cb - Changed Namespace 00:21:28.792 Cleaning up... 00:21:28.792 [ 00:21:28.792 { 00:21:28.792 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:28.792 "subtype": "Discovery", 00:21:28.792 "listen_addresses": [], 00:21:28.792 "allow_any_host": true, 00:21:28.792 "hosts": [] 00:21:28.792 }, 00:21:28.792 { 00:21:28.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.792 "subtype": "NVMe", 00:21:28.792 "listen_addresses": [ 00:21:28.792 { 00:21:28.792 "trtype": "TCP", 00:21:28.792 "adrfam": "IPv4", 00:21:28.792 "traddr": "10.0.0.2", 00:21:28.792 "trsvcid": "4420" 00:21:28.792 } 00:21:28.792 ], 00:21:28.792 "allow_any_host": true, 00:21:28.792 "hosts": [], 00:21:28.792 "serial_number": "SPDK00000000000001", 00:21:28.792 "model_number": "SPDK bdev Controller", 00:21:28.792 "max_namespaces": 2, 00:21:28.792 "min_cntlid": 1, 00:21:28.792 "max_cntlid": 65519, 00:21:28.792 "namespaces": [ 00:21:28.792 { 00:21:28.792 "nsid": 1, 00:21:28.792 "bdev_name": "Malloc0", 00:21:28.792 "name": "Malloc0", 00:21:28.792 "nguid": "028D6D43478F486D89D8B38D1983A097", 00:21:28.792 "uuid": "028d6d43-478f-486d-89d8-b38d1983a097" 00:21:28.792 }, 00:21:28.792 { 00:21:28.792 "nsid": 2, 00:21:28.792 "bdev_name": "Malloc1", 00:21:28.792 "name": "Malloc1", 00:21:28.792 "nguid": "E06C01A1EB49479B9E3502DFA6551BBA", 00:21:28.792 "uuid": "e06c01a1-eb49-479b-9e35-02dfa6551bba" 00:21:28.792 } 00:21:28.792 ] 00:21:28.792 } 00:21:28.792 ] 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3137010 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:28.792 17:12:16 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:28.792 rmmod nvme_tcp 00:21:28.792 rmmod nvme_fabrics 00:21:28.792 rmmod nvme_keyring 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3136764 ']' 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3136764 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3136764 ']' 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3136764 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3136764 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3136764' 00:21:28.792 killing process with pid 3136764 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3136764 00:21:28.792 [2024-05-15 17:12:16.364981] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:28.792 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3136764 00:21:29.051 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:29.051 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:29.051 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:29.051 17:12:16 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:29.051 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:29.051 17:12:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:29.051 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:29.052 17:12:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.587 17:12:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:31.587 00:21:31.587 real 0m9.251s 00:21:31.587 user 0m7.171s 00:21:31.587 sys 0m4.564s 00:21:31.587 17:12:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:31.587 17:12:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:31.587 ************************************ 00:21:31.587 END TEST nvmf_aer 00:21:31.587 ************************************ 00:21:31.587 17:12:18 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:31.587 17:12:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:31.587 17:12:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:31.587 17:12:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.587 ************************************ 00:21:31.587 START TEST nvmf_async_init 00:21:31.587 ************************************ 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:31.587 * Looking for test storage... 00:21:31.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
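Stripped of the xtrace noise, the nvmf_aer test body that just completed is roughly the sequence below. rpc_cmd in the trace is assumed to be SPDK's wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock; commands are shown relative to the spdk checkout, and the listener address, subsystem NQN, serial number and touch-file path are copied from this run. A sketch of the flow, not the exact aer.sh script:

# Rough replay of the nvmf_aer flow seen above (rpc.py stands in for the rpc_cmd wrapper).
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 --name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# The AER reader connects from the initiator side and arms a namespace-change notice.
rm -f /tmp/aer_touch_file
./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
aerpid=$!
until [ -e /tmp/aer_touch_file ]; do sleep 0.1; done        # aer touches the file once it is ready

# Attaching a second namespace is what fires the event logged above ("aer_cb - Changed Namespace").
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait "$aerpid"

# Teardown mirrors the trace: delete the bdevs and the subsystem before nvmftestfini.
$rpc bdev_malloc_delete Malloc0
$rpc bdev_malloc_delete Malloc1
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1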
00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:31.587 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.587 
17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f04a4e74e21144d6ad156ba359557e5d 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:31.588 17:12:18 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:36.854 17:12:23 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.854 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:36.855 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:36.855 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.855 17:12:23 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:36.855 Found net devices under 0000:86:00.0: cvl_0_0 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:36.855 Found net devices under 0000:86:00.1: cvl_0_1 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:36.855 17:12:23 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:36.855 17:12:23 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:36.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:21:36.855 00:21:36.855 --- 10.0.0.2 ping statistics --- 00:21:36.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.855 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:36.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:36.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:21:36.855 00:21:36.855 --- 10.0.0.1 ping statistics --- 00:21:36.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.855 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3140523 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3140523 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 3140523 ']' 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.855 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:36.855 [2024-05-15 17:12:24.178666] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:21:36.855 [2024-05-15 17:12:24.178711] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.855 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.855 [2024-05-15 17:12:24.235494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.855 [2024-05-15 17:12:24.314109] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.855 [2024-05-15 17:12:24.314144] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
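The nvmfappstart step traced above boils down to launching the target inside the namespace and blocking until its RPC socket answers. A minimal stand-in, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods as the readiness probe in place of the waitforlisten helper:

# Sketch of nvmfappstart: run nvmf_tgt in the target namespace, then wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

# Stand-in for waitforlisten: poll the RPC socket until the app responds.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
done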
00:21:36.855 [2024-05-15 17:12:24.314151] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.855 [2024-05-15 17:12:24.314157] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.855 [2024-05-15 17:12:24.314163] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.855 [2024-05-15 17:12:24.314187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.421 17:12:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:37.421 17:12:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:21:37.421 17:12:24 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:37.421 17:12:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:37.421 17:12:24 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.421 [2024-05-15 17:12:25.006119] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.421 null0 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f04a4e74e21144d6ad156ba359557e5d 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:37.421 
17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.421 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.421 [2024-05-15 17:12:25.050184] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:37.422 [2024-05-15 17:12:25.050374] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.422 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.422 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:37.422 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.422 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.680 nvme0n1 00:21:37.680 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.680 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:37.680 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.680 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.680 [ 00:21:37.680 { 00:21:37.680 "name": "nvme0n1", 00:21:37.680 "aliases": [ 00:21:37.680 "f04a4e74-e211-44d6-ad15-6ba359557e5d" 00:21:37.680 ], 00:21:37.680 "product_name": "NVMe disk", 00:21:37.680 "block_size": 512, 00:21:37.680 "num_blocks": 2097152, 00:21:37.680 "uuid": "f04a4e74-e211-44d6-ad15-6ba359557e5d", 00:21:37.680 "assigned_rate_limits": { 00:21:37.680 "rw_ios_per_sec": 0, 00:21:37.680 "rw_mbytes_per_sec": 0, 00:21:37.680 "r_mbytes_per_sec": 0, 00:21:37.680 "w_mbytes_per_sec": 0 00:21:37.680 }, 00:21:37.680 "claimed": false, 00:21:37.680 "zoned": false, 00:21:37.680 "supported_io_types": { 00:21:37.680 "read": true, 00:21:37.680 "write": true, 00:21:37.680 "unmap": false, 00:21:37.680 "write_zeroes": true, 00:21:37.680 "flush": true, 00:21:37.680 "reset": true, 00:21:37.680 "compare": true, 00:21:37.680 "compare_and_write": true, 00:21:37.680 "abort": true, 00:21:37.680 "nvme_admin": true, 00:21:37.680 "nvme_io": true 00:21:37.680 }, 00:21:37.680 "memory_domains": [ 00:21:37.680 { 00:21:37.680 "dma_device_id": "system", 00:21:37.680 "dma_device_type": 1 00:21:37.680 } 00:21:37.680 ], 00:21:37.680 "driver_specific": { 00:21:37.680 "nvme": [ 00:21:37.680 { 00:21:37.680 "trid": { 00:21:37.680 "trtype": "TCP", 00:21:37.680 "adrfam": "IPv4", 00:21:37.680 "traddr": "10.0.0.2", 00:21:37.680 "trsvcid": "4420", 00:21:37.680 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:37.680 }, 00:21:37.680 "ctrlr_data": { 00:21:37.680 "cntlid": 1, 00:21:37.680 "vendor_id": "0x8086", 00:21:37.680 "model_number": "SPDK bdev Controller", 00:21:37.680 "serial_number": "00000000000000000000", 00:21:37.680 "firmware_revision": "24.05", 00:21:37.680 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:37.680 "oacs": { 00:21:37.680 "security": 0, 00:21:37.680 "format": 0, 00:21:37.680 "firmware": 0, 00:21:37.680 "ns_manage": 0 00:21:37.680 }, 00:21:37.680 "multi_ctrlr": true, 00:21:37.680 "ana_reporting": false 00:21:37.680 }, 00:21:37.680 "vs": { 00:21:37.680 "nvme_version": "1.3" 00:21:37.680 }, 00:21:37.680 "ns_data": { 00:21:37.680 "id": 1, 00:21:37.680 "can_share": true 00:21:37.680 } 
00:21:37.680 } 00:21:37.680 ], 00:21:37.680 "mp_policy": "active_passive" 00:21:37.680 } 00:21:37.680 } 00:21:37.680 ] 00:21:37.680 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.680 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:37.680 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.680 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.680 [2024-05-15 17:12:25.302861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:37.680 [2024-05-15 17:12:25.302913] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c85260 (9): Bad file descriptor 00:21:37.938 [2024-05-15 17:12:25.435254] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:37.938 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.938 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:37.938 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.938 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.938 [ 00:21:37.938 { 00:21:37.938 "name": "nvme0n1", 00:21:37.938 "aliases": [ 00:21:37.938 "f04a4e74-e211-44d6-ad15-6ba359557e5d" 00:21:37.938 ], 00:21:37.938 "product_name": "NVMe disk", 00:21:37.938 "block_size": 512, 00:21:37.938 "num_blocks": 2097152, 00:21:37.939 "uuid": "f04a4e74-e211-44d6-ad15-6ba359557e5d", 00:21:37.939 "assigned_rate_limits": { 00:21:37.939 "rw_ios_per_sec": 0, 00:21:37.939 "rw_mbytes_per_sec": 0, 00:21:37.939 "r_mbytes_per_sec": 0, 00:21:37.939 "w_mbytes_per_sec": 0 00:21:37.939 }, 00:21:37.939 "claimed": false, 00:21:37.939 "zoned": false, 00:21:37.939 "supported_io_types": { 00:21:37.939 "read": true, 00:21:37.939 "write": true, 00:21:37.939 "unmap": false, 00:21:37.939 "write_zeroes": true, 00:21:37.939 "flush": true, 00:21:37.939 "reset": true, 00:21:37.939 "compare": true, 00:21:37.939 "compare_and_write": true, 00:21:37.939 "abort": true, 00:21:37.939 "nvme_admin": true, 00:21:37.939 "nvme_io": true 00:21:37.939 }, 00:21:37.939 "memory_domains": [ 00:21:37.939 { 00:21:37.939 "dma_device_id": "system", 00:21:37.939 "dma_device_type": 1 00:21:37.939 } 00:21:37.939 ], 00:21:37.939 "driver_specific": { 00:21:37.939 "nvme": [ 00:21:37.939 { 00:21:37.939 "trid": { 00:21:37.939 "trtype": "TCP", 00:21:37.939 "adrfam": "IPv4", 00:21:37.939 "traddr": "10.0.0.2", 00:21:37.939 "trsvcid": "4420", 00:21:37.939 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:37.939 }, 00:21:37.939 "ctrlr_data": { 00:21:37.939 "cntlid": 2, 00:21:37.939 "vendor_id": "0x8086", 00:21:37.939 "model_number": "SPDK bdev Controller", 00:21:37.939 "serial_number": "00000000000000000000", 00:21:37.939 "firmware_revision": "24.05", 00:21:37.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:37.939 "oacs": { 00:21:37.939 "security": 0, 00:21:37.939 "format": 0, 00:21:37.939 "firmware": 0, 00:21:37.939 "ns_manage": 0 00:21:37.939 }, 00:21:37.939 "multi_ctrlr": true, 00:21:37.939 "ana_reporting": false 00:21:37.939 }, 00:21:37.939 "vs": { 00:21:37.939 "nvme_version": "1.3" 00:21:37.939 }, 00:21:37.939 "ns_data": { 00:21:37.939 "id": 1, 00:21:37.939 "can_share": true 00:21:37.939 } 00:21:37.939 } 00:21:37.939 ], 00:21:37.939 "mp_policy": "active_passive" 
00:21:37.939 } 00:21:37.939 } 00:21:37.939 ] 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.IPrNf22wlE 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.IPrNf22wlE 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.939 [2024-05-15 17:12:25.495510] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:37.939 [2024-05-15 17:12:25.495617] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IPrNf22wlE 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.939 [2024-05-15 17:12:25.503522] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IPrNf22wlE 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.939 [2024-05-15 17:12:25.515555] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.939 [2024-05-15 17:12:25.515590] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:21:37.939 nvme0n1 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.939 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:37.939 [ 00:21:37.939 { 00:21:37.939 "name": "nvme0n1", 00:21:37.939 "aliases": [ 00:21:37.939 "f04a4e74-e211-44d6-ad15-6ba359557e5d" 00:21:37.939 ], 00:21:37.939 "product_name": "NVMe disk", 00:21:37.939 "block_size": 512, 00:21:37.939 "num_blocks": 2097152, 00:21:37.939 "uuid": "f04a4e74-e211-44d6-ad15-6ba359557e5d", 00:21:37.939 "assigned_rate_limits": { 00:21:37.939 "rw_ios_per_sec": 0, 00:21:37.939 "rw_mbytes_per_sec": 0, 00:21:37.939 "r_mbytes_per_sec": 0, 00:21:37.939 "w_mbytes_per_sec": 0 00:21:37.939 }, 00:21:37.939 "claimed": false, 00:21:37.939 "zoned": false, 00:21:37.939 "supported_io_types": { 00:21:37.939 "read": true, 00:21:37.939 "write": true, 00:21:37.939 "unmap": false, 00:21:37.939 "write_zeroes": true, 00:21:37.939 "flush": true, 00:21:37.939 "reset": true, 00:21:37.939 "compare": true, 00:21:37.939 "compare_and_write": true, 00:21:37.939 "abort": true, 00:21:37.939 "nvme_admin": true, 00:21:37.939 "nvme_io": true 00:21:37.939 }, 00:21:37.939 "memory_domains": [ 00:21:37.939 { 00:21:37.939 "dma_device_id": "system", 00:21:37.939 "dma_device_type": 1 00:21:37.939 } 00:21:37.939 ], 00:21:37.939 "driver_specific": { 00:21:37.939 "nvme": [ 00:21:37.939 { 00:21:37.939 "trid": { 00:21:37.939 "trtype": "TCP", 00:21:37.939 "adrfam": "IPv4", 00:21:37.939 "traddr": "10.0.0.2", 00:21:37.939 "trsvcid": "4421", 00:21:37.939 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:37.939 }, 00:21:37.939 "ctrlr_data": { 00:21:37.939 "cntlid": 3, 00:21:37.939 "vendor_id": "0x8086", 00:21:37.939 "model_number": "SPDK bdev Controller", 00:21:37.939 "serial_number": "00000000000000000000", 00:21:37.939 "firmware_revision": "24.05", 00:21:37.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:37.939 "oacs": { 00:21:37.939 "security": 0, 00:21:37.939 "format": 0, 00:21:37.939 "firmware": 0, 00:21:37.939 "ns_manage": 0 00:21:37.939 }, 00:21:37.939 "multi_ctrlr": true, 00:21:37.940 "ana_reporting": false 00:21:37.940 }, 00:21:37.940 "vs": { 00:21:37.940 "nvme_version": "1.3" 00:21:37.940 }, 00:21:37.940 "ns_data": { 00:21:38.199 "id": 1, 00:21:38.199 "can_share": true 00:21:38.199 } 00:21:38.199 } 00:21:38.199 ], 00:21:38.199 "mp_policy": "active_passive" 00:21:38.199 } 00:21:38.199 } 00:21:38.199 ] 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.IPrNf22wlE 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:38.199 rmmod nvme_tcp 00:21:38.199 rmmod nvme_fabrics 00:21:38.199 rmmod nvme_keyring 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3140523 ']' 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3140523 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3140523 ']' 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3140523 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3140523 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3140523' 00:21:38.199 killing process with pid 3140523 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3140523 00:21:38.199 [2024-05-15 17:12:25.717850] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:38.199 [2024-05-15 17:12:25.717874] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:38.199 [2024-05-15 17:12:25.717884] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:38.199 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3140523 00:21:38.543 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:38.543 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:38.543 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:38.543 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:38.543 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:38.543 17:12:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.543 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:38.543 17:12:25 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.468 17:12:27 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:40.468 00:21:40.468 real 0m9.248s 00:21:40.468 user 0m3.446s 00:21:40.468 sys 0m4.318s 00:21:40.468 17:12:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:40.468 17:12:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:40.468 ************************************ 00:21:40.468 END TEST nvmf_async_init 00:21:40.468 ************************************ 00:21:40.468 17:12:28 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:40.468 17:12:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:40.468 17:12:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:40.468 17:12:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:40.468 ************************************ 00:21:40.468 START TEST dma 00:21:40.468 ************************************ 00:21:40.468 17:12:28 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:40.727 * Looking for test storage... 00:21:40.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:40.728 17:12:28 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.728 17:12:28 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.728 17:12:28 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.728 17:12:28 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.728 17:12:28 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.728 17:12:28 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.728 17:12:28 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.728 17:12:28 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:21:40.728 17:12:28 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:40.728 17:12:28 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:40.728 17:12:28 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:40.728 17:12:28 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:21:40.728 00:21:40.728 real 0m0.123s 00:21:40.728 user 0m0.057s 00:21:40.728 sys 0m0.074s 00:21:40.728 17:12:28 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:40.728 17:12:28 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:21:40.728 ************************************ 
00:21:40.728 END TEST dma 00:21:40.728 ************************************ 00:21:40.728 17:12:28 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:40.728 17:12:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:40.728 17:12:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:40.728 17:12:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:40.728 ************************************ 00:21:40.728 START TEST nvmf_identify 00:21:40.728 ************************************ 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:40.728 * Looking for test storage... 00:21:40.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:40.728 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.729 17:12:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.729 17:12:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.729 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:40.729 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:40.729 17:12:28 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:40.729 17:12:28 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:45.993 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:45.993 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.993 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:45.994 Found net devices under 0000:86:00.0: cvl_0_0 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:45.994 Found net devices under 0000:86:00.1: cvl_0_1 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:45.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:45.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:21:45.994 00:21:45.994 --- 10.0.0.2 ping statistics --- 00:21:45.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.994 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:45.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:21:45.994 00:21:45.994 --- 10.0.0.1 ping statistics --- 00:21:45.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.994 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3144219 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3144219 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3144219 ']' 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:45.994 17:12:33 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.994 [2024-05-15 17:12:33.630177] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:21:45.994 [2024-05-15 17:12:33.630225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.252 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.252 [2024-05-15 17:12:33.689376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.252 [2024-05-15 17:12:33.771104] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.252 [2024-05-15 17:12:33.771140] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.252 [2024-05-15 17:12:33.771147] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.252 [2024-05-15 17:12:33.771154] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.252 [2024-05-15 17:12:33.771159] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.252 [2024-05-15 17:12:33.771207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.252 [2024-05-15 17:12:33.771302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.252 [2024-05-15 17:12:33.771378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:46.252 [2024-05-15 17:12:33.771380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.817 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:46.817 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:21:46.818 17:12:34 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.818 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.818 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:46.818 [2024-05-15 17:12:34.451085] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.818 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.818 17:12:34 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:46.818 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.818 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:47.080 17:12:34 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:47.080 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.080 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:47.080 Malloc0 00:21:47.080 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.080 17:12:34 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:47.080 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.080 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:47.080 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.080 17:12:34 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:21:47.080 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.080 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:47.080 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.081 17:12:34 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:47.081 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.081 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:47.081 [2024-05-15 17:12:34.530588] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:47.081 [2024-05-15 17:12:34.530825] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.081 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.081 17:12:34 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:47.081 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.081 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:47.081 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.081 17:12:34 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:47.081 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.081 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:47.081 [ 00:21:47.081 { 00:21:47.081 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:47.081 "subtype": "Discovery", 00:21:47.081 "listen_addresses": [ 00:21:47.081 { 00:21:47.081 "trtype": "TCP", 00:21:47.081 "adrfam": "IPv4", 00:21:47.081 "traddr": "10.0.0.2", 00:21:47.081 "trsvcid": "4420" 00:21:47.081 } 00:21:47.081 ], 00:21:47.081 "allow_any_host": true, 00:21:47.081 "hosts": [] 00:21:47.081 }, 00:21:47.081 { 00:21:47.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.081 "subtype": "NVMe", 00:21:47.081 "listen_addresses": [ 00:21:47.081 { 00:21:47.081 "trtype": "TCP", 00:21:47.081 "adrfam": "IPv4", 00:21:47.081 "traddr": "10.0.0.2", 00:21:47.081 "trsvcid": "4420" 00:21:47.081 } 00:21:47.081 ], 00:21:47.081 "allow_any_host": true, 00:21:47.081 "hosts": [], 00:21:47.081 "serial_number": "SPDK00000000000001", 00:21:47.081 "model_number": "SPDK bdev Controller", 00:21:47.081 "max_namespaces": 32, 00:21:47.081 "min_cntlid": 1, 00:21:47.081 "max_cntlid": 65519, 00:21:47.081 "namespaces": [ 00:21:47.081 { 00:21:47.081 "nsid": 1, 00:21:47.081 "bdev_name": "Malloc0", 00:21:47.081 "name": "Malloc0", 00:21:47.081 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:47.081 "eui64": "ABCDEF0123456789", 00:21:47.081 "uuid": "982bf91f-c3be-4bd1-842d-22831ca485b9" 00:21:47.081 } 00:21:47.081 ] 00:21:47.081 } 00:21:47.081 ] 00:21:47.081 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.081 17:12:34 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:47.081 [2024-05-15 
17:12:34.582170] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:21:47.081 [2024-05-15 17:12:34.582220] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144388 ] 00:21:47.081 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.081 [2024-05-15 17:12:34.611811] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:47.081 [2024-05-15 17:12:34.611853] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:47.081 [2024-05-15 17:12:34.611859] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:47.081 [2024-05-15 17:12:34.611869] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:47.081 [2024-05-15 17:12:34.611879] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:47.081 [2024-05-15 17:12:34.615299] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:47.081 [2024-05-15 17:12:34.615329] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x112ac30 0 00:21:47.081 [2024-05-15 17:12:34.623174] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:47.081 [2024-05-15 17:12:34.623191] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:47.081 [2024-05-15 17:12:34.623196] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:47.081 [2024-05-15 17:12:34.623199] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:47.081 [2024-05-15 17:12:34.623233] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.081 [2024-05-15 17:12:34.623238] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.081 [2024-05-15 17:12:34.623243] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x112ac30) 00:21:47.081 [2024-05-15 17:12:34.623257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:47.081 [2024-05-15 17:12:34.623272] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192980, cid 0, qid 0 00:21:47.081 [2024-05-15 17:12:34.631175] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.081 [2024-05-15 17:12:34.631183] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.081 [2024-05-15 17:12:34.631187] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.081 [2024-05-15 17:12:34.631191] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192980) on tqpair=0x112ac30 00:21:47.081 [2024-05-15 17:12:34.631204] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:47.081 [2024-05-15 17:12:34.631210] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:47.081 [2024-05-15 17:12:34.631215] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:47.081 [2024-05-15 17:12:34.631226] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:21:47.081 [2024-05-15 17:12:34.631229] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.081 [2024-05-15 17:12:34.631232] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x112ac30) 00:21:47.081 [2024-05-15 17:12:34.631240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.081 [2024-05-15 17:12:34.631252] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192980, cid 0, qid 0 00:21:47.081 [2024-05-15 17:12:34.631432] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.081 [2024-05-15 17:12:34.631438] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.081 [2024-05-15 17:12:34.631440] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.081 [2024-05-15 17:12:34.631444] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192980) on tqpair=0x112ac30 00:21:47.081 [2024-05-15 17:12:34.631450] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:47.081 [2024-05-15 17:12:34.631456] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:47.081 [2024-05-15 17:12:34.631462] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.081 [2024-05-15 17:12:34.631466] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.081 [2024-05-15 17:12:34.631469] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x112ac30) 00:21:47.081 [2024-05-15 17:12:34.631475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.081 [2024-05-15 17:12:34.631484] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192980, cid 0, qid 0 00:21:47.081 [2024-05-15 17:12:34.631555] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.081 [2024-05-15 17:12:34.631560] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.081 [2024-05-15 17:12:34.631563] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.081 [2024-05-15 17:12:34.631567] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192980) on tqpair=0x112ac30 00:21:47.081 [2024-05-15 17:12:34.631572] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:47.081 [2024-05-15 17:12:34.631579] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:47.081 [2024-05-15 17:12:34.631585] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.081 [2024-05-15 17:12:34.631588] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.081 [2024-05-15 17:12:34.631591] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x112ac30) 00:21:47.081 [2024-05-15 17:12:34.631597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.081 [2024-05-15 17:12:34.631606] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192980, cid 0, qid 0 00:21:47.081 [2024-05-15 17:12:34.631676] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.081 [2024-05-15 17:12:34.631681] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.081 [2024-05-15 17:12:34.631684] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.081 [2024-05-15 17:12:34.631687] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192980) on tqpair=0x112ac30 00:21:47.081 [2024-05-15 17:12:34.631693] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:47.081 [2024-05-15 17:12:34.631701] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.081 [2024-05-15 17:12:34.631704] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.081 [2024-05-15 17:12:34.631707] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x112ac30) 00:21:47.081 [2024-05-15 17:12:34.631713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.081 [2024-05-15 17:12:34.631721] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192980, cid 0, qid 0 00:21:47.081 [2024-05-15 17:12:34.631796] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.081 [2024-05-15 17:12:34.631801] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.081 [2024-05-15 17:12:34.631804] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.081 [2024-05-15 17:12:34.631807] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192980) on tqpair=0x112ac30 00:21:47.081 [2024-05-15 17:12:34.631812] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:47.081 [2024-05-15 17:12:34.631816] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:47.081 [2024-05-15 17:12:34.631822] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:47.082 [2024-05-15 17:12:34.631927] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:47.082 [2024-05-15 17:12:34.631931] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:47.082 [2024-05-15 17:12:34.631940] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.631943] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.631946] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x112ac30) 00:21:47.082 [2024-05-15 17:12:34.631954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.082 [2024-05-15 17:12:34.631964] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192980, cid 0, qid 0 00:21:47.082 [2024-05-15 17:12:34.632070] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.082 [2024-05-15 17:12:34.632075] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:47.082 [2024-05-15 17:12:34.632078] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632081] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192980) on tqpair=0x112ac30 00:21:47.082 [2024-05-15 17:12:34.632085] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:47.082 [2024-05-15 17:12:34.632094] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632098] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632101] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x112ac30) 00:21:47.082 [2024-05-15 17:12:34.632106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.082 [2024-05-15 17:12:34.632115] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192980, cid 0, qid 0 00:21:47.082 [2024-05-15 17:12:34.632196] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.082 [2024-05-15 17:12:34.632203] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.082 [2024-05-15 17:12:34.632206] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632209] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192980) on tqpair=0x112ac30 00:21:47.082 [2024-05-15 17:12:34.632213] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:47.082 [2024-05-15 17:12:34.632217] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:47.082 [2024-05-15 17:12:34.632224] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:47.082 [2024-05-15 17:12:34.632231] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:47.082 [2024-05-15 17:12:34.632239] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632242] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x112ac30) 00:21:47.082 [2024-05-15 17:12:34.632248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.082 [2024-05-15 17:12:34.632258] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192980, cid 0, qid 0 00:21:47.082 [2024-05-15 17:12:34.632361] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.082 [2024-05-15 17:12:34.632367] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.082 [2024-05-15 17:12:34.632370] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632374] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x112ac30): datao=0, datal=4096, cccid=0 00:21:47.082 [2024-05-15 17:12:34.632378] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1192980) on tqpair(0x112ac30): expected_datao=0, 
payload_size=4096 00:21:47.082 [2024-05-15 17:12:34.632382] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632389] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632393] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632414] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.082 [2024-05-15 17:12:34.632422] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.082 [2024-05-15 17:12:34.632424] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632427] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192980) on tqpair=0x112ac30 00:21:47.082 [2024-05-15 17:12:34.632435] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:47.082 [2024-05-15 17:12:34.632439] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:47.082 [2024-05-15 17:12:34.632443] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:47.082 [2024-05-15 17:12:34.632448] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:47.082 [2024-05-15 17:12:34.632452] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:47.082 [2024-05-15 17:12:34.632456] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:47.082 [2024-05-15 17:12:34.632467] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:47.082 [2024-05-15 17:12:34.632474] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632478] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632481] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x112ac30) 00:21:47.082 [2024-05-15 17:12:34.632487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:47.082 [2024-05-15 17:12:34.632497] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192980, cid 0, qid 0 00:21:47.082 [2024-05-15 17:12:34.632572] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.082 [2024-05-15 17:12:34.632578] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.082 [2024-05-15 17:12:34.632581] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632584] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192980) on tqpair=0x112ac30 00:21:47.082 [2024-05-15 17:12:34.632594] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632598] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632601] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x112ac30) 00:21:47.082 [2024-05-15 17:12:34.632606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.082 [2024-05-15 17:12:34.632612] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632615] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632618] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x112ac30) 00:21:47.082 [2024-05-15 17:12:34.632623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.082 [2024-05-15 17:12:34.632627] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632631] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632634] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x112ac30) 00:21:47.082 [2024-05-15 17:12:34.632638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.082 [2024-05-15 17:12:34.632643] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632646] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632649] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.082 [2024-05-15 17:12:34.632656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.082 [2024-05-15 17:12:34.632660] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:47.082 [2024-05-15 17:12:34.632668] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:47.082 [2024-05-15 17:12:34.632673] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632676] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x112ac30) 00:21:47.082 [2024-05-15 17:12:34.632682] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.082 [2024-05-15 17:12:34.632692] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192980, cid 0, qid 0 00:21:47.082 [2024-05-15 17:12:34.632697] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192ae0, cid 1, qid 0 00:21:47.082 [2024-05-15 17:12:34.632701] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192c40, cid 2, qid 0 00:21:47.082 [2024-05-15 17:12:34.632704] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.082 [2024-05-15 17:12:34.632708] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192f00, cid 4, qid 0 00:21:47.082 [2024-05-15 17:12:34.632816] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.082 [2024-05-15 17:12:34.632822] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.082 [2024-05-15 17:12:34.632825] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632828] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1192f00) on tqpair=0x112ac30 00:21:47.082 [2024-05-15 17:12:34.632835] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:47.082 [2024-05-15 17:12:34.632840] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:47.082 [2024-05-15 17:12:34.632848] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632852] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x112ac30) 00:21:47.082 [2024-05-15 17:12:34.632857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.082 [2024-05-15 17:12:34.632867] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192f00, cid 4, qid 0 00:21:47.082 [2024-05-15 17:12:34.632953] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.082 [2024-05-15 17:12:34.632959] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.082 [2024-05-15 17:12:34.632962] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632965] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x112ac30): datao=0, datal=4096, cccid=4 00:21:47.082 [2024-05-15 17:12:34.632969] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1192f00) on tqpair(0x112ac30): expected_datao=0, payload_size=4096 00:21:47.082 [2024-05-15 17:12:34.632973] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632979] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.082 [2024-05-15 17:12:34.632982] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.633001] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.083 [2024-05-15 17:12:34.633007] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.083 [2024-05-15 17:12:34.633010] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.633013] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192f00) on tqpair=0x112ac30 00:21:47.083 [2024-05-15 17:12:34.633027] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:47.083 [2024-05-15 17:12:34.633050] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.633054] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x112ac30) 00:21:47.083 [2024-05-15 17:12:34.633060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.083 [2024-05-15 17:12:34.633065] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.633068] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.633071] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x112ac30) 00:21:47.083 [2024-05-15 17:12:34.633076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
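[annotation] The GET LOG PAGE capsule just above (cdw10:00ff0070) and the two follow-up reads below (cdw10:02ff0070 and cdw10:00010070) target log page 0x70, the NVMe-oF discovery log. Decoding cdw10 per the NVMe base spec (LID in the low byte, a zero-based dword count in the upper bits) reproduces the 1024-, 3072- and 8-byte C2H payload sizes seen in the trace; the per-read comments below are an interpretation of that sequence (header first, full page once the record count is known, then a generation-counter re-check).

    #include <stdint.h>
    #include <stdio.h>

    /* Decode GET LOG PAGE cdw10: LID in bits 07:00, NUMDL (zero-based dword
     * count, lower part) in the upper bits. 0x70 is the discovery log page. */
    static void decode_cdw10(uint32_t cdw10, const char *note)
    {
        uint32_t lid   = cdw10 & 0xff;
        uint32_t numdl = (cdw10 >> 16) & 0xffff;
        uint32_t bytes = (numdl + 1) * 4;

        printf("cdw10=0x%08x -> LID 0x%02x, %4u bytes  (%s)\n", cdw10, lid, bytes, note);
    }

    int main(void)
    {
        decode_cdw10(0x00ff0070, "1024-byte discovery log header");
        decode_cdw10(0x02ff0070, "3072 bytes: header + 2 x 1024-byte records");
        decode_cdw10(0x00010070, "8-byte generation counter re-check");
        return 0;
    }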
00:21:47.083 [2024-05-15 17:12:34.633092] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192f00, cid 4, qid 0 00:21:47.083 [2024-05-15 17:12:34.633096] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1193060, cid 5, qid 0 00:21:47.083 [2024-05-15 17:12:34.633207] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.083 [2024-05-15 17:12:34.633213] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.083 [2024-05-15 17:12:34.633216] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.633219] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x112ac30): datao=0, datal=1024, cccid=4 00:21:47.083 [2024-05-15 17:12:34.633223] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1192f00) on tqpair(0x112ac30): expected_datao=0, payload_size=1024 00:21:47.083 [2024-05-15 17:12:34.633227] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.633232] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.633235] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.633240] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.083 [2024-05-15 17:12:34.633245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.083 [2024-05-15 17:12:34.633248] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.633251] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1193060) on tqpair=0x112ac30 00:21:47.083 [2024-05-15 17:12:34.679170] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.083 [2024-05-15 17:12:34.679182] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.083 [2024-05-15 17:12:34.679185] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.679189] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192f00) on tqpair=0x112ac30 00:21:47.083 [2024-05-15 17:12:34.679201] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.679205] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x112ac30) 00:21:47.083 [2024-05-15 17:12:34.679212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.083 [2024-05-15 17:12:34.679228] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192f00, cid 4, qid 0 00:21:47.083 [2024-05-15 17:12:34.679446] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.083 [2024-05-15 17:12:34.679451] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.083 [2024-05-15 17:12:34.679454] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.679458] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x112ac30): datao=0, datal=3072, cccid=4 00:21:47.083 [2024-05-15 17:12:34.679461] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1192f00) on tqpair(0x112ac30): expected_datao=0, payload_size=3072 00:21:47.083 [2024-05-15 17:12:34.679468] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.679474] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.679477] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.679525] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.083 [2024-05-15 17:12:34.679531] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.083 [2024-05-15 17:12:34.679534] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.679537] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192f00) on tqpair=0x112ac30 00:21:47.083 [2024-05-15 17:12:34.679545] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.679548] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x112ac30) 00:21:47.083 [2024-05-15 17:12:34.679554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.083 [2024-05-15 17:12:34.679568] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192f00, cid 4, qid 0 00:21:47.083 [2024-05-15 17:12:34.679652] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.083 [2024-05-15 17:12:34.679658] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.083 [2024-05-15 17:12:34.679661] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.679664] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x112ac30): datao=0, datal=8, cccid=4 00:21:47.083 [2024-05-15 17:12:34.679668] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1192f00) on tqpair(0x112ac30): expected_datao=0, payload_size=8 00:21:47.083 [2024-05-15 17:12:34.679671] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.679677] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.679680] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.720389] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.083 [2024-05-15 17:12:34.720400] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.083 [2024-05-15 17:12:34.720403] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.083 [2024-05-15 17:12:34.720406] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192f00) on tqpair=0x112ac30 00:21:47.083 ===================================================== 00:21:47.083 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:47.083 ===================================================== 00:21:47.083 Controller Capabilities/Features 00:21:47.083 ================================ 00:21:47.083 Vendor ID: 0000 00:21:47.083 Subsystem Vendor ID: 0000 00:21:47.083 Serial Number: .................... 00:21:47.083 Model Number: ........................................ 
00:21:47.083 Firmware Version: 24.05 00:21:47.083 Recommended Arb Burst: 0 00:21:47.083 IEEE OUI Identifier: 00 00 00 00:21:47.083 Multi-path I/O 00:21:47.083 May have multiple subsystem ports: No 00:21:47.083 May have multiple controllers: No 00:21:47.083 Associated with SR-IOV VF: No 00:21:47.083 Max Data Transfer Size: 131072 00:21:47.083 Max Number of Namespaces: 0 00:21:47.083 Max Number of I/O Queues: 1024 00:21:47.083 NVMe Specification Version (VS): 1.3 00:21:47.083 NVMe Specification Version (Identify): 1.3 00:21:47.083 Maximum Queue Entries: 128 00:21:47.083 Contiguous Queues Required: Yes 00:21:47.083 Arbitration Mechanisms Supported 00:21:47.083 Weighted Round Robin: Not Supported 00:21:47.083 Vendor Specific: Not Supported 00:21:47.083 Reset Timeout: 15000 ms 00:21:47.083 Doorbell Stride: 4 bytes 00:21:47.083 NVM Subsystem Reset: Not Supported 00:21:47.083 Command Sets Supported 00:21:47.083 NVM Command Set: Supported 00:21:47.083 Boot Partition: Not Supported 00:21:47.083 Memory Page Size Minimum: 4096 bytes 00:21:47.083 Memory Page Size Maximum: 4096 bytes 00:21:47.083 Persistent Memory Region: Not Supported 00:21:47.083 Optional Asynchronous Events Supported 00:21:47.083 Namespace Attribute Notices: Not Supported 00:21:47.083 Firmware Activation Notices: Not Supported 00:21:47.083 ANA Change Notices: Not Supported 00:21:47.083 PLE Aggregate Log Change Notices: Not Supported 00:21:47.083 LBA Status Info Alert Notices: Not Supported 00:21:47.083 EGE Aggregate Log Change Notices: Not Supported 00:21:47.083 Normal NVM Subsystem Shutdown event: Not Supported 00:21:47.083 Zone Descriptor Change Notices: Not Supported 00:21:47.083 Discovery Log Change Notices: Supported 00:21:47.083 Controller Attributes 00:21:47.083 128-bit Host Identifier: Not Supported 00:21:47.083 Non-Operational Permissive Mode: Not Supported 00:21:47.083 NVM Sets: Not Supported 00:21:47.083 Read Recovery Levels: Not Supported 00:21:47.083 Endurance Groups: Not Supported 00:21:47.083 Predictable Latency Mode: Not Supported 00:21:47.083 Traffic Based Keep ALive: Not Supported 00:21:47.083 Namespace Granularity: Not Supported 00:21:47.083 SQ Associations: Not Supported 00:21:47.083 UUID List: Not Supported 00:21:47.083 Multi-Domain Subsystem: Not Supported 00:21:47.083 Fixed Capacity Management: Not Supported 00:21:47.083 Variable Capacity Management: Not Supported 00:21:47.083 Delete Endurance Group: Not Supported 00:21:47.083 Delete NVM Set: Not Supported 00:21:47.083 Extended LBA Formats Supported: Not Supported 00:21:47.083 Flexible Data Placement Supported: Not Supported 00:21:47.083 00:21:47.083 Controller Memory Buffer Support 00:21:47.083 ================================ 00:21:47.083 Supported: No 00:21:47.083 00:21:47.083 Persistent Memory Region Support 00:21:47.083 ================================ 00:21:47.083 Supported: No 00:21:47.083 00:21:47.083 Admin Command Set Attributes 00:21:47.083 ============================ 00:21:47.083 Security Send/Receive: Not Supported 00:21:47.083 Format NVM: Not Supported 00:21:47.083 Firmware Activate/Download: Not Supported 00:21:47.083 Namespace Management: Not Supported 00:21:47.083 Device Self-Test: Not Supported 00:21:47.083 Directives: Not Supported 00:21:47.083 NVMe-MI: Not Supported 00:21:47.083 Virtualization Management: Not Supported 00:21:47.083 Doorbell Buffer Config: Not Supported 00:21:47.084 Get LBA Status Capability: Not Supported 00:21:47.084 Command & Feature Lockdown Capability: Not Supported 00:21:47.084 Abort Command Limit: 1 00:21:47.084 Async 
Event Request Limit: 4 00:21:47.084 Number of Firmware Slots: N/A 00:21:47.084 Firmware Slot 1 Read-Only: N/A 00:21:47.084 Firmware Activation Without Reset: N/A 00:21:47.084 Multiple Update Detection Support: N/A 00:21:47.084 Firmware Update Granularity: No Information Provided 00:21:47.084 Per-Namespace SMART Log: No 00:21:47.084 Asymmetric Namespace Access Log Page: Not Supported 00:21:47.084 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:47.084 Command Effects Log Page: Not Supported 00:21:47.084 Get Log Page Extended Data: Supported 00:21:47.084 Telemetry Log Pages: Not Supported 00:21:47.084 Persistent Event Log Pages: Not Supported 00:21:47.084 Supported Log Pages Log Page: May Support 00:21:47.084 Commands Supported & Effects Log Page: Not Supported 00:21:47.084 Feature Identifiers & Effects Log Page:May Support 00:21:47.084 NVMe-MI Commands & Effects Log Page: May Support 00:21:47.084 Data Area 4 for Telemetry Log: Not Supported 00:21:47.084 Error Log Page Entries Supported: 128 00:21:47.084 Keep Alive: Not Supported 00:21:47.084 00:21:47.084 NVM Command Set Attributes 00:21:47.084 ========================== 00:21:47.084 Submission Queue Entry Size 00:21:47.084 Max: 1 00:21:47.084 Min: 1 00:21:47.084 Completion Queue Entry Size 00:21:47.084 Max: 1 00:21:47.084 Min: 1 00:21:47.084 Number of Namespaces: 0 00:21:47.084 Compare Command: Not Supported 00:21:47.084 Write Uncorrectable Command: Not Supported 00:21:47.084 Dataset Management Command: Not Supported 00:21:47.084 Write Zeroes Command: Not Supported 00:21:47.084 Set Features Save Field: Not Supported 00:21:47.084 Reservations: Not Supported 00:21:47.084 Timestamp: Not Supported 00:21:47.084 Copy: Not Supported 00:21:47.084 Volatile Write Cache: Not Present 00:21:47.084 Atomic Write Unit (Normal): 1 00:21:47.084 Atomic Write Unit (PFail): 1 00:21:47.084 Atomic Compare & Write Unit: 1 00:21:47.084 Fused Compare & Write: Supported 00:21:47.084 Scatter-Gather List 00:21:47.084 SGL Command Set: Supported 00:21:47.084 SGL Keyed: Supported 00:21:47.084 SGL Bit Bucket Descriptor: Not Supported 00:21:47.084 SGL Metadata Pointer: Not Supported 00:21:47.084 Oversized SGL: Not Supported 00:21:47.084 SGL Metadata Address: Not Supported 00:21:47.084 SGL Offset: Supported 00:21:47.084 Transport SGL Data Block: Not Supported 00:21:47.084 Replay Protected Memory Block: Not Supported 00:21:47.084 00:21:47.084 Firmware Slot Information 00:21:47.084 ========================= 00:21:47.084 Active slot: 0 00:21:47.084 00:21:47.084 00:21:47.084 Error Log 00:21:47.084 ========= 00:21:47.084 00:21:47.084 Active Namespaces 00:21:47.084 ================= 00:21:47.084 Discovery Log Page 00:21:47.084 ================== 00:21:47.084 Generation Counter: 2 00:21:47.084 Number of Records: 2 00:21:47.084 Record Format: 0 00:21:47.084 00:21:47.084 Discovery Log Entry 0 00:21:47.084 ---------------------- 00:21:47.084 Transport Type: 3 (TCP) 00:21:47.084 Address Family: 1 (IPv4) 00:21:47.084 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:47.084 Entry Flags: 00:21:47.084 Duplicate Returned Information: 1 00:21:47.084 Explicit Persistent Connection Support for Discovery: 1 00:21:47.084 Transport Requirements: 00:21:47.084 Secure Channel: Not Required 00:21:47.084 Port ID: 0 (0x0000) 00:21:47.084 Controller ID: 65535 (0xffff) 00:21:47.084 Admin Max SQ Size: 128 00:21:47.084 Transport Service Identifier: 4420 00:21:47.084 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:47.084 Transport Address: 10.0.0.2 00:21:47.084 
Discovery Log Entry 1 00:21:47.084 ---------------------- 00:21:47.084 Transport Type: 3 (TCP) 00:21:47.084 Address Family: 1 (IPv4) 00:21:47.084 Subsystem Type: 2 (NVM Subsystem) 00:21:47.084 Entry Flags: 00:21:47.084 Duplicate Returned Information: 0 00:21:47.084 Explicit Persistent Connection Support for Discovery: 0 00:21:47.084 Transport Requirements: 00:21:47.084 Secure Channel: Not Required 00:21:47.084 Port ID: 0 (0x0000) 00:21:47.084 Controller ID: 65535 (0xffff) 00:21:47.084 Admin Max SQ Size: 128 00:21:47.084 Transport Service Identifier: 4420 00:21:47.084 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:47.084 Transport Address: 10.0.0.2 [2024-05-15 17:12:34.720483] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:47.084 [2024-05-15 17:12:34.720496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.084 [2024-05-15 17:12:34.720503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.084 [2024-05-15 17:12:34.720508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.084 [2024-05-15 17:12:34.720513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.084 [2024-05-15 17:12:34.720520] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.720524] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.720527] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.084 [2024-05-15 17:12:34.720534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.084 [2024-05-15 17:12:34.720547] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.084 [2024-05-15 17:12:34.720618] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.084 [2024-05-15 17:12:34.720623] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.084 [2024-05-15 17:12:34.720628] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.720632] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.084 [2024-05-15 17:12:34.720638] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.720641] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.720644] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.084 [2024-05-15 17:12:34.720650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.084 [2024-05-15 17:12:34.720663] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.084 [2024-05-15 17:12:34.720743] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.084 [2024-05-15 17:12:34.720748] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.084 [2024-05-15 17:12:34.720751] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.720754] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.084 [2024-05-15 17:12:34.720759] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:47.084 [2024-05-15 17:12:34.720763] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:47.084 [2024-05-15 17:12:34.720771] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.720775] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.720778] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.084 [2024-05-15 17:12:34.720783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.084 [2024-05-15 17:12:34.720792] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.084 [2024-05-15 17:12:34.720864] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.084 [2024-05-15 17:12:34.720869] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.084 [2024-05-15 17:12:34.720872] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.720875] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.084 [2024-05-15 17:12:34.720884] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.720888] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.720891] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.084 [2024-05-15 17:12:34.720896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.084 [2024-05-15 17:12:34.720905] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.084 [2024-05-15 17:12:34.720978] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.084 [2024-05-15 17:12:34.720984] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.084 [2024-05-15 17:12:34.720987] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.720990] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.084 [2024-05-15 17:12:34.720998] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.721002] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.721005] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.084 [2024-05-15 17:12:34.721011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.084 [2024-05-15 17:12:34.721019] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.084 [2024-05-15 17:12:34.721099] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.084 [2024-05-15 
17:12:34.721104] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.084 [2024-05-15 17:12:34.721107] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.084 [2024-05-15 17:12:34.721111] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.085 [2024-05-15 17:12:34.721119] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721123] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721126] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.085 [2024-05-15 17:12:34.721131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.085 [2024-05-15 17:12:34.721140] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.085 [2024-05-15 17:12:34.721216] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.085 [2024-05-15 17:12:34.721222] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.085 [2024-05-15 17:12:34.721224] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721228] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.085 [2024-05-15 17:12:34.721236] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721240] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721243] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.085 [2024-05-15 17:12:34.721249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.085 [2024-05-15 17:12:34.721258] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.085 [2024-05-15 17:12:34.721332] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.085 [2024-05-15 17:12:34.721337] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.085 [2024-05-15 17:12:34.721340] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721344] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.085 [2024-05-15 17:12:34.721352] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721356] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721359] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.085 [2024-05-15 17:12:34.721364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.085 [2024-05-15 17:12:34.721373] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.085 [2024-05-15 17:12:34.721442] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.085 [2024-05-15 17:12:34.721448] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.085 [2024-05-15 17:12:34.721451] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:21:47.085 [2024-05-15 17:12:34.721454] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.085 [2024-05-15 17:12:34.721463] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721466] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721469] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.085 [2024-05-15 17:12:34.721475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.085 [2024-05-15 17:12:34.721484] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.085 [2024-05-15 17:12:34.721565] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.085 [2024-05-15 17:12:34.721572] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.085 [2024-05-15 17:12:34.721575] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721578] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.085 [2024-05-15 17:12:34.721587] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721591] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721594] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.085 [2024-05-15 17:12:34.721599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.085 [2024-05-15 17:12:34.721608] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.085 [2024-05-15 17:12:34.721683] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.085 [2024-05-15 17:12:34.721689] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.085 [2024-05-15 17:12:34.721691] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721694] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.085 [2024-05-15 17:12:34.721703] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721707] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721710] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.085 [2024-05-15 17:12:34.721715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.085 [2024-05-15 17:12:34.721724] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.085 [2024-05-15 17:12:34.721799] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.085 [2024-05-15 17:12:34.721805] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.085 [2024-05-15 17:12:34.721808] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721811] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.085 [2024-05-15 17:12:34.721820] 
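[annotation] The long run of FABRIC PROPERTY GET capsules above and below is the shutdown poll that follows "Prepare to destruct" and "shutdown timeout = 10000 ms": the host sets CC.SHN and keeps re-reading CSTS until CSTS.SHST reports shutdown complete (the trace later records this finishing in 6 ms). A register-level sketch of that handshake; the in-memory register file and prop_get/prop_set helpers are stand-ins for the fabrics Property Get/Set capsules, not SPDK code.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* NVMe register offsets/fields used by the shutdown handshake (base spec). */
    #define REG_CC          0x14
    #define REG_CSTS        0x1c
    #define CC_SHN_NORMAL   (1u << 14)  /* CC.SHN = 01b: normal shutdown notification    */
    #define CSTS_SHST_MASK  (3u << 2)
    #define CSTS_SHST_DONE  (2u << 2)   /* CSTS.SHST = 10b: shutdown processing complete */

    /* Stand-ins for fabrics Property Get/Set (a real host sends capsules instead). */
    static uint32_t regs[0x40];
    static uint32_t prop_get(uint32_t off)             { return regs[off / 4]; }
    static void     prop_set(uint32_t off, uint32_t v) { regs[off / 4] = v; }

    static bool shutdown_controller(uint32_t timeout_ms)
    {
        prop_set(REG_CC, prop_get(REG_CC) | CC_SHN_NORMAL);    /* request shutdown */

        for (uint32_t waited_ms = 0; waited_ms <= timeout_ms; waited_ms++) {
            if ((prop_get(REG_CSTS) & CSTS_SHST_MASK) == CSTS_SHST_DONE) {
                /* the real target in this trace reported completion after 6 ms */
                printf("shutdown complete in %u milliseconds\n", waited_ms);
                return true;
            }
            /* a real host would sleep ~1 ms between polls here */
        }
        return false;    /* exceeded the 10000 ms shutdown budget */
    }

    int main(void)
    {
        regs[REG_CSTS / 4] = CSTS_SHST_DONE;   /* simulate an immediately-compliant target */
        return shutdown_controller(10000) ? 0 : 1;
    }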
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721823] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721826] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.085 [2024-05-15 17:12:34.721832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.085 [2024-05-15 17:12:34.721840] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.085 [2024-05-15 17:12:34.721914] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.085 [2024-05-15 17:12:34.721920] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.085 [2024-05-15 17:12:34.721922] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721925] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.085 [2024-05-15 17:12:34.721934] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721938] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.721941] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.085 [2024-05-15 17:12:34.721946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.085 [2024-05-15 17:12:34.721955] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.085 [2024-05-15 17:12:34.722034] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.085 [2024-05-15 17:12:34.722039] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.085 [2024-05-15 17:12:34.722044] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.722048] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.085 [2024-05-15 17:12:34.722056] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.722060] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.722063] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.085 [2024-05-15 17:12:34.722068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.085 [2024-05-15 17:12:34.722077] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.085 [2024-05-15 17:12:34.722151] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.085 [2024-05-15 17:12:34.722157] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.085 [2024-05-15 17:12:34.722159] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.722163] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.085 [2024-05-15 17:12:34.722175] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.085 [2024-05-15 17:12:34.722178] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.085 [2024-05-15 
17:12:34.722182] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.085 [2024-05-15 17:12:34.722187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.085 [2024-05-15 17:12:34.722196] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.085 [2024-05-15 17:12:34.722268] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.086 [2024-05-15 17:12:34.722273] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.086 [2024-05-15 17:12:34.722276] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722279] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.086 [2024-05-15 17:12:34.722288] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722291] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722295] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.086 [2024-05-15 17:12:34.722300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.086 [2024-05-15 17:12:34.722309] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.086 [2024-05-15 17:12:34.722378] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.086 [2024-05-15 17:12:34.722383] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.086 [2024-05-15 17:12:34.722386] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722389] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.086 [2024-05-15 17:12:34.722398] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722401] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722404] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.086 [2024-05-15 17:12:34.722410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.086 [2024-05-15 17:12:34.722419] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.086 [2024-05-15 17:12:34.722502] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.086 [2024-05-15 17:12:34.722507] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.086 [2024-05-15 17:12:34.722510] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722515] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.086 [2024-05-15 17:12:34.722524] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722527] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722531] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.086 [2024-05-15 17:12:34.722536] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.086 [2024-05-15 17:12:34.722545] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.086 [2024-05-15 17:12:34.722619] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.086 [2024-05-15 17:12:34.722624] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.086 [2024-05-15 17:12:34.722627] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722631] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.086 [2024-05-15 17:12:34.722639] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722643] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722646] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.086 [2024-05-15 17:12:34.722651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.086 [2024-05-15 17:12:34.722660] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.086 [2024-05-15 17:12:34.722736] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.086 [2024-05-15 17:12:34.722742] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.086 [2024-05-15 17:12:34.722745] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722748] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.086 [2024-05-15 17:12:34.722757] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722760] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722763] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.086 [2024-05-15 17:12:34.722769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.086 [2024-05-15 17:12:34.722778] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.086 [2024-05-15 17:12:34.722852] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.086 [2024-05-15 17:12:34.722858] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.086 [2024-05-15 17:12:34.722861] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722864] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.086 [2024-05-15 17:12:34.722872] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722876] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722879] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.086 [2024-05-15 17:12:34.722885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.086 [2024-05-15 17:12:34.722893] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.086 [2024-05-15 17:12:34.722969] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.086 [2024-05-15 17:12:34.722975] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.086 [2024-05-15 17:12:34.722978] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722981] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.086 [2024-05-15 17:12:34.722993] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.722996] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.723000] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.086 [2024-05-15 17:12:34.723005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.086 [2024-05-15 17:12:34.723014] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.086 [2024-05-15 17:12:34.723086] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.086 [2024-05-15 17:12:34.723091] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.086 [2024-05-15 17:12:34.723094] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.723097] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.086 [2024-05-15 17:12:34.723106] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.723109] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.723112] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.086 [2024-05-15 17:12:34.723118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.086 [2024-05-15 17:12:34.723127] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.086 [2024-05-15 17:12:34.727173] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.086 [2024-05-15 17:12:34.727181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.086 [2024-05-15 17:12:34.727184] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.727187] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.086 [2024-05-15 17:12:34.727197] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.727201] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.727204] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x112ac30) 00:21:47.086 [2024-05-15 17:12:34.727210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.086 [2024-05-15 17:12:34.727220] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1192da0, cid 3, qid 0 00:21:47.086 [2024-05-15 17:12:34.727382] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:21:47.086 [2024-05-15 17:12:34.727388] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.086 [2024-05-15 17:12:34.727391] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.086 [2024-05-15 17:12:34.727394] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1192da0) on tqpair=0x112ac30 00:21:47.086 [2024-05-15 17:12:34.727401] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:21:47.348 00:21:47.348 17:12:34 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:47.348 [2024-05-15 17:12:34.763872] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:21:47.348 [2024-05-15 17:12:34.763917] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144390 ] 00:21:47.348 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.348 [2024-05-15 17:12:34.793010] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:47.348 [2024-05-15 17:12:34.793047] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:47.348 [2024-05-15 17:12:34.793052] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:47.348 [2024-05-15 17:12:34.793062] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:47.348 [2024-05-15 17:12:34.793068] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:47.348 [2024-05-15 17:12:34.793401] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:47.348 [2024-05-15 17:12:34.793421] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10e4c30 0 00:21:47.348 [2024-05-15 17:12:34.807169] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:47.348 [2024-05-15 17:12:34.807185] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:47.348 [2024-05-15 17:12:34.807189] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:47.348 [2024-05-15 17:12:34.807192] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:47.348 [2024-05-15 17:12:34.807221] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.348 [2024-05-15 17:12:34.807226] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.348 [2024-05-15 17:12:34.807230] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e4c30) 00:21:47.349 [2024-05-15 17:12:34.807240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:47.349 [2024-05-15 17:12:34.807255] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114c980, cid 0, qid 0 00:21:47.349 [2024-05-15 17:12:34.815175] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.349 [2024-05-15 17:12:34.815183] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.349 
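[annotation] At this point the harness has moved from the discovery controller to the data subsystem: spdk_nvme_identify is re-run with the cnode1 subnqn, and the trace below repeats the connect sequence (icreq/icresp, FABRIC CONNECT, register reads, CC.EN handshake, IDENTIFY). A minimal sketch of the same flow through the public SPDK NVMe API, assuming the spdk_nvme_transport_id_parse()/spdk_nvme_connect() signatures of this era (v24.05-pre) and with error handling trimmed.

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same transport ID string the test passes to spdk_nvme_identify -r. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Drives the same admin-queue traffic traced below: icreq/icresp,
         * FABRIC CONNECT, read vs/cap, CC.EN handshake, IDENTIFY. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("connected to %s, CNTLID 0x%04x\n", trid.subnqn, cdata->cntlid);

        spdk_nvme_detach(ctrlr);
        return 0;
    }

spdk_nvme_connect() is synchronous, which is why all of the admin-queue initialization traffic below finishes before the tool prints its controller report.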
[2024-05-15 17:12:34.815186] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815189] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114c980) on tqpair=0x10e4c30 00:21:47.349 [2024-05-15 17:12:34.815198] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:47.349 [2024-05-15 17:12:34.815204] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:47.349 [2024-05-15 17:12:34.815208] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:47.349 [2024-05-15 17:12:34.815218] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815222] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815225] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e4c30) 00:21:47.349 [2024-05-15 17:12:34.815231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.349 [2024-05-15 17:12:34.815244] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114c980, cid 0, qid 0 00:21:47.349 [2024-05-15 17:12:34.815410] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.349 [2024-05-15 17:12:34.815416] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.349 [2024-05-15 17:12:34.815420] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815423] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114c980) on tqpair=0x10e4c30 00:21:47.349 [2024-05-15 17:12:34.815428] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:47.349 [2024-05-15 17:12:34.815434] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:47.349 [2024-05-15 17:12:34.815443] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815446] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815449] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e4c30) 00:21:47.349 [2024-05-15 17:12:34.815455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.349 [2024-05-15 17:12:34.815465] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114c980, cid 0, qid 0 00:21:47.349 [2024-05-15 17:12:34.815542] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.349 [2024-05-15 17:12:34.815548] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.349 [2024-05-15 17:12:34.815551] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815554] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114c980) on tqpair=0x10e4c30 00:21:47.349 [2024-05-15 17:12:34.815559] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:47.349 [2024-05-15 17:12:34.815566] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
check en wait for cc (timeout 15000 ms) 00:21:47.349 [2024-05-15 17:12:34.815572] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815575] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815578] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e4c30) 00:21:47.349 [2024-05-15 17:12:34.815583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.349 [2024-05-15 17:12:34.815592] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114c980, cid 0, qid 0 00:21:47.349 [2024-05-15 17:12:34.815666] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.349 [2024-05-15 17:12:34.815672] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.349 [2024-05-15 17:12:34.815675] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815678] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114c980) on tqpair=0x10e4c30 00:21:47.349 [2024-05-15 17:12:34.815683] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:47.349 [2024-05-15 17:12:34.815691] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815695] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815698] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e4c30) 00:21:47.349 [2024-05-15 17:12:34.815703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.349 [2024-05-15 17:12:34.815713] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114c980, cid 0, qid 0 00:21:47.349 [2024-05-15 17:12:34.815782] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.349 [2024-05-15 17:12:34.815787] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.349 [2024-05-15 17:12:34.815790] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815793] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114c980) on tqpair=0x10e4c30 00:21:47.349 [2024-05-15 17:12:34.815798] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:47.349 [2024-05-15 17:12:34.815802] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:47.349 [2024-05-15 17:12:34.815808] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:47.349 [2024-05-15 17:12:34.815913] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:47.349 [2024-05-15 17:12:34.815918] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:47.349 [2024-05-15 17:12:34.815924] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815927] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.815930] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e4c30) 00:21:47.349 [2024-05-15 17:12:34.815936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.349 [2024-05-15 17:12:34.815945] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114c980, cid 0, qid 0 00:21:47.349 [2024-05-15 17:12:34.816016] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.349 [2024-05-15 17:12:34.816021] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.349 [2024-05-15 17:12:34.816024] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.816027] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114c980) on tqpair=0x10e4c30 00:21:47.349 [2024-05-15 17:12:34.816032] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:47.349 [2024-05-15 17:12:34.816040] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.816043] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.816046] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e4c30) 00:21:47.349 [2024-05-15 17:12:34.816052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.349 [2024-05-15 17:12:34.816061] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114c980, cid 0, qid 0 00:21:47.349 [2024-05-15 17:12:34.816130] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.349 [2024-05-15 17:12:34.816136] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.349 [2024-05-15 17:12:34.816139] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.816142] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114c980) on tqpair=0x10e4c30 00:21:47.349 [2024-05-15 17:12:34.816146] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:47.349 [2024-05-15 17:12:34.816150] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:47.349 [2024-05-15 17:12:34.816156] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:47.349 [2024-05-15 17:12:34.816168] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:47.349 [2024-05-15 17:12:34.816176] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.816179] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e4c30) 00:21:47.349 [2024-05-15 17:12:34.816185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.349 [2024-05-15 17:12:34.816195] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114c980, cid 0, qid 0 00:21:47.349 
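
Illustrative aside: the entries above trace the controller-enable handshake that SPDK's NVMe host driver runs after the fabric CONNECT — read VS and CAP, check CC.EN, clear EN and wait for CSTS.RDY = 0, write CC.EN = 1, then wait for CSTS.RDY = 1 before moving on to IDENTIFY. A minimal host-side sketch of attaching to the same target through SPDK's public API follows; the address, port and subsystem NQN are taken from this log, while the program structure and the application name are illustrative assumptions, not part of the test.

/* Illustrative sketch only: connect to the NVMe-oF/TCP target seen in this log
 * (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1). spdk_nvme_connect() performs the
 * CC.EN / CSTS.RDY handshake traced above internally. Assumes an SPDK build
 * providing spdk/nvme.h and spdk/env.h; error handling trimmed for brevity. */
#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";   /* hypothetical app name */
    if (spdk_env_init(&env_opts) != 0) {
        return 1;
    }

    spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
    trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
    snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
    snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
    snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

    /* NULL opts -> library defaults; the connect issues the fabric CONNECT
     * plus the property GET/SET exchanges logged above. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }

    spdk_nvme_detach(ctrlr);
    return 0;
}
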
[2024-05-15 17:12:34.816316] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.349 [2024-05-15 17:12:34.816322] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.349 [2024-05-15 17:12:34.816325] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.816328] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10e4c30): datao=0, datal=4096, cccid=0 00:21:47.349 [2024-05-15 17:12:34.816332] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114c980) on tqpair(0x10e4c30): expected_datao=0, payload_size=4096 00:21:47.349 [2024-05-15 17:12:34.816338] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.816344] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.816348] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.816390] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.349 [2024-05-15 17:12:34.816395] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.349 [2024-05-15 17:12:34.816398] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.349 [2024-05-15 17:12:34.816401] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114c980) on tqpair=0x10e4c30 00:21:47.349 [2024-05-15 17:12:34.816408] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:47.349 [2024-05-15 17:12:34.816412] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:47.349 [2024-05-15 17:12:34.816416] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:47.349 [2024-05-15 17:12:34.816419] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:47.349 [2024-05-15 17:12:34.816423] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:47.349 [2024-05-15 17:12:34.816427] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:47.349 [2024-05-15 17:12:34.816437] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:47.350 [2024-05-15 17:12:34.816445] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816448] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816451] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e4c30) 00:21:47.350 [2024-05-15 17:12:34.816457] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:47.350 [2024-05-15 17:12:34.816467] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114c980, cid 0, qid 0 00:21:47.350 [2024-05-15 17:12:34.816541] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.350 [2024-05-15 17:12:34.816546] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.350 [2024-05-15 17:12:34.816549] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.350 [2024-05-15 
17:12:34.816552] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114c980) on tqpair=0x10e4c30 00:21:47.350 [2024-05-15 17:12:34.816560] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816563] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816567] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10e4c30) 00:21:47.350 [2024-05-15 17:12:34.816572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.350 [2024-05-15 17:12:34.816577] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816580] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816583] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10e4c30) 00:21:47.350 [2024-05-15 17:12:34.816588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.350 [2024-05-15 17:12:34.816593] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816596] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816599] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10e4c30) 00:21:47.350 [2024-05-15 17:12:34.816604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.350 [2024-05-15 17:12:34.816610] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816614] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816616] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e4c30) 00:21:47.350 [2024-05-15 17:12:34.816621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.350 [2024-05-15 17:12:34.816625] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:47.350 [2024-05-15 17:12:34.816633] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:47.350 [2024-05-15 17:12:34.816638] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816642] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10e4c30) 00:21:47.350 [2024-05-15 17:12:34.816647] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.350 [2024-05-15 17:12:34.816658] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114c980, cid 0, qid 0 00:21:47.350 [2024-05-15 17:12:34.816662] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cae0, cid 1, qid 0 00:21:47.350 [2024-05-15 17:12:34.816666] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cc40, cid 2, qid 0 00:21:47.350 [2024-05-15 17:12:34.816670] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cda0, cid 3, qid 
0 00:21:47.350 [2024-05-15 17:12:34.816674] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cf00, cid 4, qid 0 00:21:47.350 [2024-05-15 17:12:34.816784] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.350 [2024-05-15 17:12:34.816790] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.350 [2024-05-15 17:12:34.816792] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816795] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cf00) on tqpair=0x10e4c30 00:21:47.350 [2024-05-15 17:12:34.816802] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:47.350 [2024-05-15 17:12:34.816807] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:47.350 [2024-05-15 17:12:34.816814] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:47.350 [2024-05-15 17:12:34.816819] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:47.350 [2024-05-15 17:12:34.816825] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816828] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816831] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10e4c30) 00:21:47.350 [2024-05-15 17:12:34.816837] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:47.350 [2024-05-15 17:12:34.816846] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cf00, cid 4, qid 0 00:21:47.350 [2024-05-15 17:12:34.816920] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.350 [2024-05-15 17:12:34.816926] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.350 [2024-05-15 17:12:34.816929] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816932] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cf00) on tqpair=0x10e4c30 00:21:47.350 [2024-05-15 17:12:34.816976] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:47.350 [2024-05-15 17:12:34.816987] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:47.350 [2024-05-15 17:12:34.816994] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.816997] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10e4c30) 00:21:47.350 [2024-05-15 17:12:34.817002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.350 [2024-05-15 17:12:34.817012] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cf00, cid 4, qid 0 00:21:47.350 [2024-05-15 17:12:34.817110] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.350 [2024-05-15 17:12:34.817116] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.350 [2024-05-15 17:12:34.817119] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.817122] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10e4c30): datao=0, datal=4096, cccid=4 00:21:47.350 [2024-05-15 17:12:34.817125] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114cf00) on tqpair(0x10e4c30): expected_datao=0, payload_size=4096 00:21:47.350 [2024-05-15 17:12:34.817129] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.817135] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.817138] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.861173] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.350 [2024-05-15 17:12:34.861186] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.350 [2024-05-15 17:12:34.861189] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.861193] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cf00) on tqpair=0x10e4c30 00:21:47.350 [2024-05-15 17:12:34.861206] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:47.350 [2024-05-15 17:12:34.861220] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:47.350 [2024-05-15 17:12:34.861229] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:47.350 [2024-05-15 17:12:34.861236] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.861239] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10e4c30) 00:21:47.350 [2024-05-15 17:12:34.861246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.350 [2024-05-15 17:12:34.861258] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cf00, cid 4, qid 0 00:21:47.350 [2024-05-15 17:12:34.861428] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.350 [2024-05-15 17:12:34.861434] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.350 [2024-05-15 17:12:34.861437] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.861441] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10e4c30): datao=0, datal=4096, cccid=4 00:21:47.350 [2024-05-15 17:12:34.861445] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114cf00) on tqpair(0x10e4c30): expected_datao=0, payload_size=4096 00:21:47.350 [2024-05-15 17:12:34.861448] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.861471] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.861475] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.902300] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.350 [2024-05-15 17:12:34.902311] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.350 
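
Illustrative aside: the IDENTIFY exchanges above (controller data with MDTS 131072 and CNTLID 0x0001, then the active namespace list and namespace 1's data) are cached by the driver and are the source of the human-readable report printed later in this log. Below is a hedged sketch of reading those cached results through the public API; "ctrlr" is assumed to be the handle from the earlier connect sketch, and the helper name is made up for illustration.

/* Sketch: inspect identify data cached by the driver after attach. Assumes
 * "ctrlr" is the handle returned by spdk_nvme_connect() in the earlier sketch. */
#include "spdk/nvme.h"
#include <inttypes.h>
#include <stdio.h>

static void print_identify_summary(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    uint32_t nsid;

    /* cdata mirrors the "Identify Controller" fields in the report below,
     * e.g. model number, serial number and MDTS. */
    printf("Model: %.40s  Serial: %.20s  MDTS: %u\n",
           (const char *)cdata->mn, (const char *)cdata->sn,
           (unsigned)cdata->mdts);

    /* Walk active namespaces; the log above shows namespace 1 being added. */
    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        printf("ns %u: %" PRIu64 " sectors of %u bytes\n", nsid,
               spdk_nvme_ns_get_num_sectors(ns),
               spdk_nvme_ns_get_sector_size(ns));
    }
}
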
[2024-05-15 17:12:34.902319] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.902323] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cf00) on tqpair=0x10e4c30 00:21:47.350 [2024-05-15 17:12:34.902334] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:47.350 [2024-05-15 17:12:34.902344] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:47.350 [2024-05-15 17:12:34.902351] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.902355] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10e4c30) 00:21:47.350 [2024-05-15 17:12:34.902362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.350 [2024-05-15 17:12:34.902374] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cf00, cid 4, qid 0 00:21:47.350 [2024-05-15 17:12:34.902496] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.350 [2024-05-15 17:12:34.902501] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.350 [2024-05-15 17:12:34.902504] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.902507] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10e4c30): datao=0, datal=4096, cccid=4 00:21:47.350 [2024-05-15 17:12:34.902511] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114cf00) on tqpair(0x10e4c30): expected_datao=0, payload_size=4096 00:21:47.350 [2024-05-15 17:12:34.902515] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.350 [2024-05-15 17:12:34.902545] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.902549] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.902596] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.351 [2024-05-15 17:12:34.902602] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.351 [2024-05-15 17:12:34.902605] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.902608] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cf00) on tqpair=0x10e4c30 00:21:47.351 [2024-05-15 17:12:34.902619] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:47.351 [2024-05-15 17:12:34.902626] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:47.351 [2024-05-15 17:12:34.902633] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:47.351 [2024-05-15 17:12:34.902638] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:47.351 [2024-05-15 17:12:34.902643] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:47.351 
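
Illustrative aside: after the namespace-descriptor IDENTIFY above, the driver moves through "set supported log pages" and, as the GET LOG PAGE (02h) commands a little further down show, fetches the error, SMART, firmware-slot and command-effects pages. The sketch below fetches the SMART / Health Information page the same way an application could; it assumes the "ctrlr" handle from the connect sketch and is not the test's own code.

/* Sketch: fetch the SMART / Health Information log page, mirroring the
 * GET LOG PAGE admin commands that appear further down in this trace.
 * The callback and buffer handling are illustrative assumptions. */
#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>

static bool g_log_done;

static void log_page_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    if (spdk_nvme_cpl_is_error(cpl)) {
        fprintf(stderr, "GET LOG PAGE failed\n");
    }
    g_log_done = true;
}

static int fetch_health_log(struct spdk_nvme_ctrlr *ctrlr,
                            struct spdk_nvme_health_information_page *health)
{
    int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                                              SPDK_NVME_LOG_HEALTH_INFORMATION,
                                              SPDK_NVME_GLOBAL_NS_TAG,
                                              health, sizeof(*health), 0,
                                              log_page_cb, NULL);
    if (rc != 0) {
        return rc;
    }
    /* Admin commands complete asynchronously; poll the admin queue until
     * the completion callback runs. */
    while (!g_log_done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    return 0;
}
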
[2024-05-15 17:12:34.902647] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:47.351 [2024-05-15 17:12:34.902651] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:47.351 [2024-05-15 17:12:34.902656] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:47.351 [2024-05-15 17:12:34.902670] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.902673] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10e4c30) 00:21:47.351 [2024-05-15 17:12:34.902680] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.351 [2024-05-15 17:12:34.902687] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.902691] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.902694] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10e4c30) 00:21:47.351 [2024-05-15 17:12:34.902699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:47.351 [2024-05-15 17:12:34.902711] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cf00, cid 4, qid 0 00:21:47.351 [2024-05-15 17:12:34.902716] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114d060, cid 5, qid 0 00:21:47.351 [2024-05-15 17:12:34.902802] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.351 [2024-05-15 17:12:34.902808] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.351 [2024-05-15 17:12:34.902811] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.902814] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cf00) on tqpair=0x10e4c30 00:21:47.351 [2024-05-15 17:12:34.902820] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.351 [2024-05-15 17:12:34.902825] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.351 [2024-05-15 17:12:34.902828] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.902831] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114d060) on tqpair=0x10e4c30 00:21:47.351 [2024-05-15 17:12:34.902840] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.902844] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10e4c30) 00:21:47.351 [2024-05-15 17:12:34.902849] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.351 [2024-05-15 17:12:34.902859] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114d060, cid 5, qid 0 00:21:47.351 [2024-05-15 17:12:34.902933] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.351 [2024-05-15 17:12:34.902939] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.351 [2024-05-15 17:12:34.902942] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 
17:12:34.902945] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114d060) on tqpair=0x10e4c30 00:21:47.351 [2024-05-15 17:12:34.902953] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.902957] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10e4c30) 00:21:47.351 [2024-05-15 17:12:34.902962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.351 [2024-05-15 17:12:34.902971] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114d060, cid 5, qid 0 00:21:47.351 [2024-05-15 17:12:34.903043] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.351 [2024-05-15 17:12:34.903049] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.351 [2024-05-15 17:12:34.903052] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903055] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114d060) on tqpair=0x10e4c30 00:21:47.351 [2024-05-15 17:12:34.903063] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903066] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10e4c30) 00:21:47.351 [2024-05-15 17:12:34.903072] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.351 [2024-05-15 17:12:34.903080] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114d060, cid 5, qid 0 00:21:47.351 [2024-05-15 17:12:34.903187] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.351 [2024-05-15 17:12:34.903193] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.351 [2024-05-15 17:12:34.903198] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903201] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114d060) on tqpair=0x10e4c30 00:21:47.351 [2024-05-15 17:12:34.903212] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903216] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10e4c30) 00:21:47.351 [2024-05-15 17:12:34.903222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.351 [2024-05-15 17:12:34.903227] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903230] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10e4c30) 00:21:47.351 [2024-05-15 17:12:34.903236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.351 [2024-05-15 17:12:34.903242] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903245] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x10e4c30) 00:21:47.351 [2024-05-15 17:12:34.903250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:47.351 [2024-05-15 17:12:34.903258] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903262] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10e4c30) 00:21:47.351 [2024-05-15 17:12:34.903267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.351 [2024-05-15 17:12:34.903278] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114d060, cid 5, qid 0 00:21:47.351 [2024-05-15 17:12:34.903283] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cf00, cid 4, qid 0 00:21:47.351 [2024-05-15 17:12:34.903287] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114d1c0, cid 6, qid 0 00:21:47.351 [2024-05-15 17:12:34.903291] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114d320, cid 7, qid 0 00:21:47.351 [2024-05-15 17:12:34.903453] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.351 [2024-05-15 17:12:34.903459] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.351 [2024-05-15 17:12:34.903462] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903465] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10e4c30): datao=0, datal=8192, cccid=5 00:21:47.351 [2024-05-15 17:12:34.903469] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114d060) on tqpair(0x10e4c30): expected_datao=0, payload_size=8192 00:21:47.351 [2024-05-15 17:12:34.903472] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903478] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903481] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903486] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.351 [2024-05-15 17:12:34.903491] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.351 [2024-05-15 17:12:34.903494] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903497] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10e4c30): datao=0, datal=512, cccid=4 00:21:47.351 [2024-05-15 17:12:34.903501] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114cf00) on tqpair(0x10e4c30): expected_datao=0, payload_size=512 00:21:47.351 [2024-05-15 17:12:34.903504] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903510] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903513] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903519] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.351 [2024-05-15 17:12:34.903524] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.351 [2024-05-15 17:12:34.903527] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903530] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10e4c30): datao=0, datal=512, cccid=6 00:21:47.351 [2024-05-15 17:12:34.903533] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114d1c0) on 
tqpair(0x10e4c30): expected_datao=0, payload_size=512 00:21:47.351 [2024-05-15 17:12:34.903537] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903542] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903545] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903550] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:47.351 [2024-05-15 17:12:34.903555] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:47.351 [2024-05-15 17:12:34.903558] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903561] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10e4c30): datao=0, datal=4096, cccid=7 00:21:47.351 [2024-05-15 17:12:34.903564] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x114d320) on tqpair(0x10e4c30): expected_datao=0, payload_size=4096 00:21:47.351 [2024-05-15 17:12:34.903568] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903574] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903576] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:47.351 [2024-05-15 17:12:34.903584] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.351 [2024-05-15 17:12:34.903589] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.351 [2024-05-15 17:12:34.903591] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.352 [2024-05-15 17:12:34.903595] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114d060) on tqpair=0x10e4c30 00:21:47.352 [2024-05-15 17:12:34.903606] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.352 [2024-05-15 17:12:34.903610] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.352 [2024-05-15 17:12:34.903614] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.352 [2024-05-15 17:12:34.903617] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cf00) on tqpair=0x10e4c30 00:21:47.352 [2024-05-15 17:12:34.903624] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.352 [2024-05-15 17:12:34.903629] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.352 [2024-05-15 17:12:34.903632] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.352 [2024-05-15 17:12:34.903635] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114d1c0) on tqpair=0x10e4c30 00:21:47.352 [2024-05-15 17:12:34.903643] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.352 [2024-05-15 17:12:34.903648] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.352 [2024-05-15 17:12:34.903651] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.352 [2024-05-15 17:12:34.903654] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114d320) on tqpair=0x10e4c30 00:21:47.352 ===================================================== 00:21:47.352 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:47.352 ===================================================== 00:21:47.352 Controller Capabilities/Features 00:21:47.352 ================================ 00:21:47.352 Vendor ID: 8086 00:21:47.352 
Subsystem Vendor ID: 8086 00:21:47.352 Serial Number: SPDK00000000000001 00:21:47.352 Model Number: SPDK bdev Controller 00:21:47.352 Firmware Version: 24.05 00:21:47.352 Recommended Arb Burst: 6 00:21:47.352 IEEE OUI Identifier: e4 d2 5c 00:21:47.352 Multi-path I/O 00:21:47.352 May have multiple subsystem ports: Yes 00:21:47.352 May have multiple controllers: Yes 00:21:47.352 Associated with SR-IOV VF: No 00:21:47.352 Max Data Transfer Size: 131072 00:21:47.352 Max Number of Namespaces: 32 00:21:47.352 Max Number of I/O Queues: 127 00:21:47.352 NVMe Specification Version (VS): 1.3 00:21:47.352 NVMe Specification Version (Identify): 1.3 00:21:47.352 Maximum Queue Entries: 128 00:21:47.352 Contiguous Queues Required: Yes 00:21:47.352 Arbitration Mechanisms Supported 00:21:47.352 Weighted Round Robin: Not Supported 00:21:47.352 Vendor Specific: Not Supported 00:21:47.352 Reset Timeout: 15000 ms 00:21:47.352 Doorbell Stride: 4 bytes 00:21:47.352 NVM Subsystem Reset: Not Supported 00:21:47.352 Command Sets Supported 00:21:47.352 NVM Command Set: Supported 00:21:47.352 Boot Partition: Not Supported 00:21:47.352 Memory Page Size Minimum: 4096 bytes 00:21:47.352 Memory Page Size Maximum: 4096 bytes 00:21:47.352 Persistent Memory Region: Not Supported 00:21:47.352 Optional Asynchronous Events Supported 00:21:47.352 Namespace Attribute Notices: Supported 00:21:47.352 Firmware Activation Notices: Not Supported 00:21:47.352 ANA Change Notices: Not Supported 00:21:47.352 PLE Aggregate Log Change Notices: Not Supported 00:21:47.352 LBA Status Info Alert Notices: Not Supported 00:21:47.352 EGE Aggregate Log Change Notices: Not Supported 00:21:47.352 Normal NVM Subsystem Shutdown event: Not Supported 00:21:47.352 Zone Descriptor Change Notices: Not Supported 00:21:47.352 Discovery Log Change Notices: Not Supported 00:21:47.352 Controller Attributes 00:21:47.352 128-bit Host Identifier: Supported 00:21:47.352 Non-Operational Permissive Mode: Not Supported 00:21:47.352 NVM Sets: Not Supported 00:21:47.352 Read Recovery Levels: Not Supported 00:21:47.352 Endurance Groups: Not Supported 00:21:47.352 Predictable Latency Mode: Not Supported 00:21:47.352 Traffic Based Keep ALive: Not Supported 00:21:47.352 Namespace Granularity: Not Supported 00:21:47.352 SQ Associations: Not Supported 00:21:47.352 UUID List: Not Supported 00:21:47.352 Multi-Domain Subsystem: Not Supported 00:21:47.352 Fixed Capacity Management: Not Supported 00:21:47.352 Variable Capacity Management: Not Supported 00:21:47.352 Delete Endurance Group: Not Supported 00:21:47.352 Delete NVM Set: Not Supported 00:21:47.352 Extended LBA Formats Supported: Not Supported 00:21:47.352 Flexible Data Placement Supported: Not Supported 00:21:47.352 00:21:47.352 Controller Memory Buffer Support 00:21:47.352 ================================ 00:21:47.352 Supported: No 00:21:47.352 00:21:47.352 Persistent Memory Region Support 00:21:47.352 ================================ 00:21:47.352 Supported: No 00:21:47.352 00:21:47.352 Admin Command Set Attributes 00:21:47.352 ============================ 00:21:47.352 Security Send/Receive: Not Supported 00:21:47.352 Format NVM: Not Supported 00:21:47.352 Firmware Activate/Download: Not Supported 00:21:47.352 Namespace Management: Not Supported 00:21:47.352 Device Self-Test: Not Supported 00:21:47.352 Directives: Not Supported 00:21:47.352 NVMe-MI: Not Supported 00:21:47.352 Virtualization Management: Not Supported 00:21:47.352 Doorbell Buffer Config: Not Supported 00:21:47.352 Get LBA Status Capability: Not Supported 
00:21:47.352 Command & Feature Lockdown Capability: Not Supported 00:21:47.352 Abort Command Limit: 4 00:21:47.352 Async Event Request Limit: 4 00:21:47.352 Number of Firmware Slots: N/A 00:21:47.352 Firmware Slot 1 Read-Only: N/A 00:21:47.352 Firmware Activation Without Reset: N/A 00:21:47.352 Multiple Update Detection Support: N/A 00:21:47.352 Firmware Update Granularity: No Information Provided 00:21:47.352 Per-Namespace SMART Log: No 00:21:47.352 Asymmetric Namespace Access Log Page: Not Supported 00:21:47.352 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:47.352 Command Effects Log Page: Supported 00:21:47.352 Get Log Page Extended Data: Supported 00:21:47.352 Telemetry Log Pages: Not Supported 00:21:47.352 Persistent Event Log Pages: Not Supported 00:21:47.352 Supported Log Pages Log Page: May Support 00:21:47.352 Commands Supported & Effects Log Page: Not Supported 00:21:47.352 Feature Identifiers & Effects Log Page:May Support 00:21:47.352 NVMe-MI Commands & Effects Log Page: May Support 00:21:47.352 Data Area 4 for Telemetry Log: Not Supported 00:21:47.352 Error Log Page Entries Supported: 128 00:21:47.352 Keep Alive: Supported 00:21:47.352 Keep Alive Granularity: 10000 ms 00:21:47.352 00:21:47.352 NVM Command Set Attributes 00:21:47.352 ========================== 00:21:47.352 Submission Queue Entry Size 00:21:47.352 Max: 64 00:21:47.352 Min: 64 00:21:47.352 Completion Queue Entry Size 00:21:47.352 Max: 16 00:21:47.352 Min: 16 00:21:47.352 Number of Namespaces: 32 00:21:47.352 Compare Command: Supported 00:21:47.352 Write Uncorrectable Command: Not Supported 00:21:47.352 Dataset Management Command: Supported 00:21:47.352 Write Zeroes Command: Supported 00:21:47.352 Set Features Save Field: Not Supported 00:21:47.352 Reservations: Supported 00:21:47.352 Timestamp: Not Supported 00:21:47.352 Copy: Supported 00:21:47.352 Volatile Write Cache: Present 00:21:47.352 Atomic Write Unit (Normal): 1 00:21:47.352 Atomic Write Unit (PFail): 1 00:21:47.352 Atomic Compare & Write Unit: 1 00:21:47.352 Fused Compare & Write: Supported 00:21:47.352 Scatter-Gather List 00:21:47.352 SGL Command Set: Supported 00:21:47.352 SGL Keyed: Supported 00:21:47.352 SGL Bit Bucket Descriptor: Not Supported 00:21:47.352 SGL Metadata Pointer: Not Supported 00:21:47.352 Oversized SGL: Not Supported 00:21:47.352 SGL Metadata Address: Not Supported 00:21:47.352 SGL Offset: Supported 00:21:47.352 Transport SGL Data Block: Not Supported 00:21:47.352 Replay Protected Memory Block: Not Supported 00:21:47.352 00:21:47.352 Firmware Slot Information 00:21:47.352 ========================= 00:21:47.352 Active slot: 1 00:21:47.352 Slot 1 Firmware Revision: 24.05 00:21:47.352 00:21:47.352 00:21:47.352 Commands Supported and Effects 00:21:47.352 ============================== 00:21:47.352 Admin Commands 00:21:47.352 -------------- 00:21:47.352 Get Log Page (02h): Supported 00:21:47.352 Identify (06h): Supported 00:21:47.352 Abort (08h): Supported 00:21:47.352 Set Features (09h): Supported 00:21:47.352 Get Features (0Ah): Supported 00:21:47.352 Asynchronous Event Request (0Ch): Supported 00:21:47.352 Keep Alive (18h): Supported 00:21:47.352 I/O Commands 00:21:47.352 ------------ 00:21:47.352 Flush (00h): Supported LBA-Change 00:21:47.352 Write (01h): Supported LBA-Change 00:21:47.352 Read (02h): Supported 00:21:47.352 Compare (05h): Supported 00:21:47.352 Write Zeroes (08h): Supported LBA-Change 00:21:47.352 Dataset Management (09h): Supported LBA-Change 00:21:47.352 Copy (19h): Supported LBA-Change 00:21:47.352 Unknown (79h): 
Supported LBA-Change 00:21:47.352 Unknown (7Ah): Supported 00:21:47.352 00:21:47.352 Error Log 00:21:47.352 ========= 00:21:47.352 00:21:47.352 Arbitration 00:21:47.352 =========== 00:21:47.352 Arbitration Burst: 1 00:21:47.352 00:21:47.352 Power Management 00:21:47.352 ================ 00:21:47.352 Number of Power States: 1 00:21:47.353 Current Power State: Power State #0 00:21:47.353 Power State #0: 00:21:47.353 Max Power: 0.00 W 00:21:47.353 Non-Operational State: Operational 00:21:47.353 Entry Latency: Not Reported 00:21:47.353 Exit Latency: Not Reported 00:21:47.353 Relative Read Throughput: 0 00:21:47.353 Relative Read Latency: 0 00:21:47.353 Relative Write Throughput: 0 00:21:47.353 Relative Write Latency: 0 00:21:47.353 Idle Power: Not Reported 00:21:47.353 Active Power: Not Reported 00:21:47.353 Non-Operational Permissive Mode: Not Supported 00:21:47.353 00:21:47.353 Health Information 00:21:47.353 ================== 00:21:47.353 Critical Warnings: 00:21:47.353 Available Spare Space: OK 00:21:47.353 Temperature: OK 00:21:47.353 Device Reliability: OK 00:21:47.353 Read Only: No 00:21:47.353 Volatile Memory Backup: OK 00:21:47.353 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:47.353 Temperature Threshold: [2024-05-15 17:12:34.903738] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.903743] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x10e4c30) 00:21:47.353 [2024-05-15 17:12:34.903749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.353 [2024-05-15 17:12:34.903761] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114d320, cid 7, qid 0 00:21:47.353 [2024-05-15 17:12:34.903854] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.353 [2024-05-15 17:12:34.903860] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.353 [2024-05-15 17:12:34.903864] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.903867] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114d320) on tqpair=0x10e4c30 00:21:47.353 [2024-05-15 17:12:34.903891] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:47.353 [2024-05-15 17:12:34.903902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.353 [2024-05-15 17:12:34.903908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.353 [2024-05-15 17:12:34.903913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.353 [2024-05-15 17:12:34.903918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:47.353 [2024-05-15 17:12:34.903925] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.903928] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.903932] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e4c30) 00:21:47.353 [2024-05-15 17:12:34.903938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.353 [2024-05-15 17:12:34.903948] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cda0, cid 3, qid 0 00:21:47.353 [2024-05-15 17:12:34.904021] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.353 [2024-05-15 17:12:34.904027] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.353 [2024-05-15 17:12:34.904030] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904033] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cda0) on tqpair=0x10e4c30 00:21:47.353 [2024-05-15 17:12:34.904039] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904042] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904045] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e4c30) 00:21:47.353 [2024-05-15 17:12:34.904051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.353 [2024-05-15 17:12:34.904063] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cda0, cid 3, qid 0 00:21:47.353 [2024-05-15 17:12:34.904151] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.353 [2024-05-15 17:12:34.904158] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.353 [2024-05-15 17:12:34.904160] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904168] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cda0) on tqpair=0x10e4c30 00:21:47.353 [2024-05-15 17:12:34.904173] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:47.353 [2024-05-15 17:12:34.904177] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:47.353 [2024-05-15 17:12:34.904185] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904189] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904192] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e4c30) 00:21:47.353 [2024-05-15 17:12:34.904197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.353 [2024-05-15 17:12:34.904206] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cda0, cid 3, qid 0 00:21:47.353 [2024-05-15 17:12:34.904282] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.353 [2024-05-15 17:12:34.904287] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.353 [2024-05-15 17:12:34.904292] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904296] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cda0) on tqpair=0x10e4c30 00:21:47.353 [2024-05-15 17:12:34.904304] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904308] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904311] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e4c30) 
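
Illustrative aside: the "Prepare to destruct SSD", "RTD3E = 0 us" and "shutdown timeout = 10000 ms" entries above are the driver's controller shutdown — it sets CC.SHN and polls CSTS.SHST until the trace later reports "shutdown complete in 5 milliseconds". From application code this is triggered simply by detaching, as in the hedged sketch below (again reusing the assumed "ctrlr" handle; this is not the test's own code).

/* Sketch: releasing the controller triggers the shutdown sequence traced
 * here (CC.SHN written, CSTS.SHST polled until shutdown complete). */
#include "spdk/nvme.h"

static void teardown(struct spdk_nvme_ctrlr *ctrlr)
{
    /* Synchronous variant; spdk_nvme_detach_async() / spdk_nvme_detach_poll()
     * exist for non-blocking teardown of several controllers at once. */
    spdk_nvme_detach(ctrlr);
}
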
00:21:47.353 [2024-05-15 17:12:34.904316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.353 [2024-05-15 17:12:34.904325] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cda0, cid 3, qid 0 00:21:47.353 [2024-05-15 17:12:34.904399] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.353 [2024-05-15 17:12:34.904405] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.353 [2024-05-15 17:12:34.904408] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904411] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cda0) on tqpair=0x10e4c30 00:21:47.353 [2024-05-15 17:12:34.904419] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904422] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904425] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e4c30) 00:21:47.353 [2024-05-15 17:12:34.904431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.353 [2024-05-15 17:12:34.904440] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cda0, cid 3, qid 0 00:21:47.353 [2024-05-15 17:12:34.904511] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.353 [2024-05-15 17:12:34.904516] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.353 [2024-05-15 17:12:34.904519] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904522] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cda0) on tqpair=0x10e4c30 00:21:47.353 [2024-05-15 17:12:34.904531] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904534] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904537] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e4c30) 00:21:47.353 [2024-05-15 17:12:34.904543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.353 [2024-05-15 17:12:34.904552] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cda0, cid 3, qid 0 00:21:47.353 [2024-05-15 17:12:34.904632] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.353 [2024-05-15 17:12:34.904638] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.353 [2024-05-15 17:12:34.904640] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904644] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cda0) on tqpair=0x10e4c30 00:21:47.353 [2024-05-15 17:12:34.904652] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904656] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.353 [2024-05-15 17:12:34.904658] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e4c30) 00:21:47.353 [2024-05-15 17:12:34.904664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.353 
[2024-05-15 17:12:34.904673] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cda0, cid 3, qid 0 00:21:47.354 [2024-05-15 17:12:34.904749] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.354 [2024-05-15 17:12:34.904754] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.354 [2024-05-15 17:12:34.904757] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.904762] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cda0) on tqpair=0x10e4c30 00:21:47.354 [2024-05-15 17:12:34.904770] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.904774] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.904777] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e4c30) 00:21:47.354 [2024-05-15 17:12:34.904783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.354 [2024-05-15 17:12:34.904791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cda0, cid 3, qid 0 00:21:47.354 [2024-05-15 17:12:34.904867] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.354 [2024-05-15 17:12:34.904873] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.354 [2024-05-15 17:12:34.904876] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.904879] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cda0) on tqpair=0x10e4c30 00:21:47.354 [2024-05-15 17:12:34.904887] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.904891] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.904894] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e4c30) 00:21:47.354 [2024-05-15 17:12:34.904899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.354 [2024-05-15 17:12:34.904908] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cda0, cid 3, qid 0 00:21:47.354 [2024-05-15 17:12:34.904977] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.354 [2024-05-15 17:12:34.904983] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.354 [2024-05-15 17:12:34.904986] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.904989] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cda0) on tqpair=0x10e4c30 00:21:47.354 [2024-05-15 17:12:34.904997] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.905001] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.905004] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e4c30) 00:21:47.354 [2024-05-15 17:12:34.905009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.354 [2024-05-15 17:12:34.905018] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cda0, cid 3, qid 0 00:21:47.354 [2024-05-15 17:12:34.905100] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.354 [2024-05-15 17:12:34.905105] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.354 [2024-05-15 17:12:34.905108] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.905111] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cda0) on tqpair=0x10e4c30 00:21:47.354 [2024-05-15 17:12:34.905120] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.905123] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.905127] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e4c30) 00:21:47.354 [2024-05-15 17:12:34.905132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.354 [2024-05-15 17:12:34.905141] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cda0, cid 3, qid 0 00:21:47.354 [2024-05-15 17:12:34.909174] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.354 [2024-05-15 17:12:34.909183] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.354 [2024-05-15 17:12:34.909185] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.909189] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cda0) on tqpair=0x10e4c30 00:21:47.354 [2024-05-15 17:12:34.909202] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.909206] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.909209] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10e4c30) 00:21:47.354 [2024-05-15 17:12:34.909215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.354 [2024-05-15 17:12:34.909226] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x114cda0, cid 3, qid 0 00:21:47.354 [2024-05-15 17:12:34.909387] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:47.354 [2024-05-15 17:12:34.909393] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:47.354 [2024-05-15 17:12:34.909396] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:47.354 [2024-05-15 17:12:34.909399] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x114cda0) on tqpair=0x10e4c30 00:21:47.354 [2024-05-15 17:12:34.909406] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:21:47.354 0 Kelvin (-273 Celsius) 00:21:47.354 Available Spare: 0% 00:21:47.354 Available Spare Threshold: 0% 00:21:47.354 Life Percentage Used: 0% 00:21:47.354 Data Units Read: 0 00:21:47.354 Data Units Written: 0 00:21:47.354 Host Read Commands: 0 00:21:47.354 Host Write Commands: 0 00:21:47.354 Controller Busy Time: 0 minutes 00:21:47.354 Power Cycles: 0 00:21:47.354 Power On Hours: 0 hours 00:21:47.354 Unsafe Shutdowns: 0 00:21:47.354 Unrecoverable Media Errors: 0 00:21:47.354 Lifetime Error Log Entries: 0 00:21:47.354 Warning Temperature Time: 0 minutes 00:21:47.354 Critical Temperature Time: 0 minutes 00:21:47.354 00:21:47.354 Number of Queues 00:21:47.354 ================ 00:21:47.354 Number of I/O 
Submission Queues: 127 00:21:47.354 Number of I/O Completion Queues: 127 00:21:47.354 00:21:47.354 Active Namespaces 00:21:47.354 ================= 00:21:47.354 Namespace ID:1 00:21:47.354 Error Recovery Timeout: Unlimited 00:21:47.354 Command Set Identifier: NVM (00h) 00:21:47.354 Deallocate: Supported 00:21:47.354 Deallocated/Unwritten Error: Not Supported 00:21:47.354 Deallocated Read Value: Unknown 00:21:47.354 Deallocate in Write Zeroes: Not Supported 00:21:47.354 Deallocated Guard Field: 0xFFFF 00:21:47.354 Flush: Supported 00:21:47.354 Reservation: Supported 00:21:47.354 Namespace Sharing Capabilities: Multiple Controllers 00:21:47.354 Size (in LBAs): 131072 (0GiB) 00:21:47.354 Capacity (in LBAs): 131072 (0GiB) 00:21:47.354 Utilization (in LBAs): 131072 (0GiB) 00:21:47.354 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:47.354 EUI64: ABCDEF0123456789 00:21:47.354 UUID: 982bf91f-c3be-4bd1-842d-22831ca485b9 00:21:47.354 Thin Provisioning: Not Supported 00:21:47.354 Per-NS Atomic Units: Yes 00:21:47.354 Atomic Boundary Size (Normal): 0 00:21:47.354 Atomic Boundary Size (PFail): 0 00:21:47.354 Atomic Boundary Offset: 0 00:21:47.354 Maximum Single Source Range Length: 65535 00:21:47.354 Maximum Copy Length: 65535 00:21:47.354 Maximum Source Range Count: 1 00:21:47.354 NGUID/EUI64 Never Reused: No 00:21:47.354 Namespace Write Protected: No 00:21:47.354 Number of LBA Formats: 1 00:21:47.354 Current LBA Format: LBA Format #00 00:21:47.354 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:47.354 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:47.354 rmmod nvme_tcp 00:21:47.354 rmmod nvme_fabrics 00:21:47.354 rmmod nvme_keyring 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3144219 ']' 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3144219 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 3144219 ']' 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3144219 00:21:47.354 17:12:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:21:47.354 
17:12:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:47.613 17:12:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3144219 00:21:47.613 17:12:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:47.613 17:12:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:47.613 17:12:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3144219' 00:21:47.613 killing process with pid 3144219 00:21:47.613 17:12:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 3144219 00:21:47.614 [2024-05-15 17:12:35.044236] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:47.614 17:12:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3144219 00:21:47.873 17:12:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:47.873 17:12:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:47.873 17:12:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:47.873 17:12:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.873 17:12:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.873 17:12:35 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.874 17:12:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.874 17:12:35 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.776 17:12:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:49.776 00:21:49.776 real 0m9.102s 00:21:49.776 user 0m7.503s 00:21:49.776 sys 0m4.288s 00:21:49.776 17:12:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:49.776 17:12:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:49.776 ************************************ 00:21:49.776 END TEST nvmf_identify 00:21:49.776 ************************************ 00:21:49.776 17:12:37 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:49.776 17:12:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:49.776 17:12:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:49.777 17:12:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:49.777 ************************************ 00:21:49.777 START TEST nvmf_perf 00:21:49.777 ************************************ 00:21:49.777 17:12:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:50.036 * Looking for test storage... 
00:21:50.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.036 17:12:37 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:50.036 17:12:37 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:55.305 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:55.306 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:55.306 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:55.306 Found net devices under 0000:86:00.0: cvl_0_0 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:55.306 Found net devices under 0000:86:00.1: cvl_0_1 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.306 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.565 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:21:55.565 00:21:55.565 --- 10.0.0.2 ping statistics --- 00:21:55.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.565 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:21:55.565 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:55.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:21:55.565 00:21:55.565 --- 10.0.0.1 ping statistics --- 00:21:55.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.565 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:21:55.565 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.565 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:21:55.565 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:55.565 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.565 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:55.565 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:55.565 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.565 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:55.565 17:12:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:55.565 17:12:43 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:55.565 17:12:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:55.565 17:12:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:55.565 17:12:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:55.565 17:12:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3147898 00:21:55.565 17:12:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3147898 00:21:55.565 17:12:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:55.565 17:12:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3147898 ']' 00:21:55.565 17:12:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.565 17:12:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:55.565 17:12:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.565 17:12:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:55.565 17:12:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:55.565 [2024-05-15 17:12:43.064009] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:21:55.565 [2024-05-15 17:12:43.064049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.565 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.565 [2024-05-15 17:12:43.124578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:55.565 [2024-05-15 17:12:43.197046] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.565 [2024-05-15 17:12:43.197089] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:55.565 [2024-05-15 17:12:43.197096] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.565 [2024-05-15 17:12:43.197102] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.565 [2024-05-15 17:12:43.197107] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.565 [2024-05-15 17:12:43.197211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.565 [2024-05-15 17:12:43.197231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.565 [2024-05-15 17:12:43.197331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.565 [2024-05-15 17:12:43.197332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.499 17:12:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:56.499 17:12:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:21:56.499 17:12:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:56.499 17:12:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:56.499 17:12:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:56.499 17:12:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.499 17:12:43 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:56.499 17:12:43 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:59.783 17:12:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:59.783 17:12:46 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:59.783 17:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:21:59.783 17:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:59.783 17:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:59.783 17:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:21:59.783 17:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:59.783 17:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:59.783 17:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:00.041 [2024-05-15 17:12:47.465839] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.041 17:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:00.041 17:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:00.041 17:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:00.299 17:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:00.299 17:12:47 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:00.558 17:12:48 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.816 [2024-05-15 17:12:48.224482] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:00.816 [2024-05-15 17:12:48.224741] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.816 17:12:48 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:00.816 17:12:48 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:00.816 17:12:48 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:00.816 17:12:48 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:00.816 17:12:48 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:02.191 Initializing NVMe Controllers 00:22:02.191 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:02.191 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:02.191 Initialization complete. Launching workers. 00:22:02.191 ======================================================== 00:22:02.191 Latency(us) 00:22:02.191 Device Information : IOPS MiB/s Average min max 00:22:02.191 PCIE (0000:5e:00.0) NSID 1 from core 0: 97780.62 381.96 327.19 10.49 4455.36 00:22:02.191 ======================================================== 00:22:02.191 Total : 97780.62 381.96 327.19 10.49 4455.36 00:22:02.191 00:22:02.191 17:12:49 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:02.191 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.566 Initializing NVMe Controllers 00:22:03.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:03.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:03.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:03.567 Initialization complete. Launching workers. 
00:22:03.567 ======================================================== 00:22:03.567 Latency(us) 00:22:03.567 Device Information : IOPS MiB/s Average min max 00:22:03.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 124.00 0.48 8368.42 136.51 44708.58 00:22:03.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 59.00 0.23 17430.76 7944.17 47887.77 00:22:03.567 ======================================================== 00:22:03.567 Total : 183.00 0.71 11290.16 136.51 47887.77 00:22:03.567 00:22:03.567 17:12:51 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:03.567 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.943 Initializing NVMe Controllers 00:22:04.943 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:04.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:04.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:04.943 Initialization complete. Launching workers. 00:22:04.943 ======================================================== 00:22:04.943 Latency(us) 00:22:04.943 Device Information : IOPS MiB/s Average min max 00:22:04.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10608.09 41.44 3015.86 399.01 9188.82 00:22:04.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3864.82 15.10 8293.98 7117.74 16009.20 00:22:04.943 ======================================================== 00:22:04.943 Total : 14472.91 56.53 4425.33 399.01 16009.20 00:22:04.943 00:22:04.943 17:12:52 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:04.943 17:12:52 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:04.943 17:12:52 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:04.943 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.516 Initializing NVMe Controllers 00:22:07.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:07.516 Controller IO queue size 128, less than required. 00:22:07.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.516 Controller IO queue size 128, less than required. 00:22:07.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:07.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:07.516 Initialization complete. Launching workers. 
00:22:07.516 ======================================================== 00:22:07.516 Latency(us) 00:22:07.516 Device Information : IOPS MiB/s Average min max 00:22:07.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1310.82 327.71 99521.87 63942.38 143540.37 00:22:07.516 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 589.42 147.36 222080.67 76481.37 326455.37 00:22:07.516 ======================================================== 00:22:07.516 Total : 1900.24 475.06 137537.34 63942.38 326455.37 00:22:07.516 00:22:07.516 17:12:54 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:07.516 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.516 No valid NVMe controllers or AIO or URING devices found 00:22:07.516 Initializing NVMe Controllers 00:22:07.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:07.516 Controller IO queue size 128, less than required. 00:22:07.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.516 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:07.516 Controller IO queue size 128, less than required. 00:22:07.516 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.516 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:07.517 WARNING: Some requested NVMe devices were skipped 00:22:07.517 17:12:55 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:07.517 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.045 Initializing NVMe Controllers 00:22:10.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:10.045 Controller IO queue size 128, less than required. 00:22:10.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:10.046 Controller IO queue size 128, less than required. 00:22:10.046 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:10.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:10.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:10.046 Initialization complete. Launching workers. 
00:22:10.046 00:22:10.046 ==================== 00:22:10.046 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:10.046 TCP transport: 00:22:10.046 polls: 28993 00:22:10.046 idle_polls: 10238 00:22:10.046 sock_completions: 18755 00:22:10.046 nvme_completions: 5599 00:22:10.046 submitted_requests: 8362 00:22:10.046 queued_requests: 1 00:22:10.046 00:22:10.046 ==================== 00:22:10.046 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:10.046 TCP transport: 00:22:10.046 polls: 32807 00:22:10.046 idle_polls: 14443 00:22:10.046 sock_completions: 18364 00:22:10.046 nvme_completions: 4633 00:22:10.046 submitted_requests: 6930 00:22:10.046 queued_requests: 1 00:22:10.046 ======================================================== 00:22:10.046 Latency(us) 00:22:10.046 Device Information : IOPS MiB/s Average min max 00:22:10.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1399.48 349.87 93597.94 51170.01 125967.43 00:22:10.046 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1157.99 289.50 112194.89 47897.12 157052.67 00:22:10.046 ======================================================== 00:22:10.046 Total : 2557.47 639.37 102018.38 47897.12 157052.67 00:22:10.046 00:22:10.046 17:12:57 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:10.046 17:12:57 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:10.304 rmmod nvme_tcp 00:22:10.304 rmmod nvme_fabrics 00:22:10.304 rmmod nvme_keyring 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3147898 ']' 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3147898 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3147898 ']' 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3147898 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:10.304 17:12:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3147898 00:22:10.563 17:12:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:10.563 17:12:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:10.563 17:12:57 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3147898' 00:22:10.563 killing process with pid 3147898 00:22:10.563 17:12:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3147898 00:22:10.563 [2024-05-15 17:12:57.965645] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:10.563 17:12:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3147898 00:22:11.939 17:12:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:11.939 17:12:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:11.939 17:12:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:11.939 17:12:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.939 17:12:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:11.939 17:12:59 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.939 17:12:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.939 17:12:59 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.473 17:13:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:14.473 00:22:14.473 real 0m24.120s 00:22:14.473 user 1m5.090s 00:22:14.473 sys 0m7.189s 00:22:14.473 17:13:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:14.473 17:13:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:14.473 ************************************ 00:22:14.473 END TEST nvmf_perf 00:22:14.473 ************************************ 00:22:14.473 17:13:01 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:14.473 17:13:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:14.473 17:13:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:14.473 17:13:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:14.473 ************************************ 00:22:14.473 START TEST nvmf_fio_host 00:22:14.473 ************************************ 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:14.473 * Looking for test storage... 
00:22:14.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:14.473 17:13:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:19.737 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:19.737 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:19.737 Found net devices under 0000:86:00.0: cvl_0_0 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:19.737 Found net devices under 0000:86:00.1: cvl_0_1 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.737 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:22:19.738 00:22:19.738 --- 10.0.0.2 ping statistics --- 00:22:19.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.738 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:19.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:22:19.738 00:22:19.738 --- 10.0.0.1 ping statistics --- 00:22:19.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.738 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=3153991 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 3153991 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3153991 ']' 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:19.738 17:13:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.738 [2024-05-15 17:13:06.828436] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:22:19.738 [2024-05-15 17:13:06.828478] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.738 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.738 [2024-05-15 17:13:06.886140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.738 [2024-05-15 17:13:06.964490] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
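[Editor's note] The nvmf_tcp_init trace above (through the two ping checks) builds the test topology: one E810 port is moved into a private network namespace and becomes the target side, while the other stays in the root namespace as the initiator side. Condensed into standalone shell, using the interface names from the PCI discovery above, the equivalent steps are roughly:

  # Editor's condensed sketch of what nvmf_tcp_init just did (same order as the trace).
  TGT_IF=cvl_0_0                 # target port, moved into the namespace
  INI_IF=cvl_0_1                 # initiator port, stays in the root namespace
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2             # root namespace -> target, as verified above
  ip netns exec "$NS" ping -c 1 10.0.0.1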
00:22:19.738 [2024-05-15 17:13:06.964530] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.738 [2024-05-15 17:13:06.964537] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.738 [2024-05-15 17:13:06.964543] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.738 [2024-05-15 17:13:06.964548] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.738 [2024-05-15 17:13:06.964587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.738 [2024-05-15 17:13:06.964681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.738 [2024-05-15 17:13:06.964770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.738 [2024-05-15 17:13:06.964772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.996 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:19.996 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:22:19.996 17:13:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:19.996 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.996 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.996 [2024-05-15 17:13:07.642909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.996 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.996 17:13:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:19.996 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.996 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.254 Malloc1 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:22:20.254 [2024-05-15 17:13:07.722764] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:20.254 [2024-05-15 17:13:07.723021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:20.254 
17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:20.254 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:20.255 17:13:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:20.512 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:20.512 fio-3.35 00:22:20.512 Starting 1 thread 00:22:20.512 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.039 00:22:23.039 test: (groupid=0, jobs=1): err= 0: pid=3154360: Wed May 15 17:13:10 2024 00:22:23.039 read: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(91.2MiB/2005msec) 00:22:23.039 slat (nsec): min=1587, max=242096, avg=1766.39, stdev=2295.17 00:22:23.039 clat (usec): min=4020, max=10524, avg=6075.12, stdev=446.15 00:22:23.039 lat (usec): min=4051, max=10525, avg=6076.89, stdev=446.07 00:22:23.039 clat percentiles (usec): 00:22:23.039 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5735], 00:22:23.039 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194], 00:22:23.039 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6587], 95.00th=[ 6783], 00:22:23.039 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 7832], 99.95th=[ 9503], 00:22:23.039 | 99.99th=[10421] 00:22:23.039 bw ( KiB/s): min=45136, max=47136, per=99.95%, avg=46534.00, stdev=939.46, samples=4 00:22:23.039 iops : min=11284, max=11784, avg=11633.50, stdev=234.86, samples=4 00:22:23.039 write: IOPS=11.6k, BW=45.1MiB/s (47.3MB/s)(90.5MiB/2005msec); 0 zone resets 00:22:23.039 slat (nsec): min=1645, max=247154, avg=1867.11, stdev=1789.47 00:22:23.039 clat (usec): min=2515, max=9230, avg=4881.43, stdev=367.44 00:22:23.039 lat (usec): min=2530, max=9232, avg=4883.30, stdev=367.41 00:22:23.039 clat percentiles (usec): 00:22:23.039 | 1.00th=[ 4047], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4621], 00:22:23.039 | 30.00th=[ 4686], 40.00th=[ 4817], 50.00th=[ 4883], 60.00th=[ 4948], 00:22:23.039 | 70.00th=[ 5080], 80.00th=[ 5145], 90.00th=[ 5342], 95.00th=[ 5473], 00:22:23.039 | 99.00th=[ 5735], 99.50th=[ 5800], 99.90th=[ 7046], 99.95th=[ 7832], 00:22:23.039 | 99.99th=[ 8586] 00:22:23.039 bw ( KiB/s): min=45520, max=46720, per=100.00%, avg=46226.00, stdev=547.77, samples=4 00:22:23.039 iops : min=11380, max=11680, avg=11556.50, stdev=136.94, samples=4 00:22:23.039 lat (msec) : 4=0.40%, 10=99.59%, 20=0.01% 00:22:23.039 cpu : usr=69.56%, sys=27.35%, ctx=46, majf=0, minf=4 00:22:23.039 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:23.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.039 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:23.039 issued rwts: total=23338,23169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.039 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:23.039 00:22:23.039 Run status group 0 (all jobs): 00:22:23.039 READ: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=91.2MiB (95.6MB), run=2005-2005msec 00:22:23.039 WRITE: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=90.5MiB (94.9MB), run=2005-2005msec 00:22:23.039 17:13:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:23.039 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:23.039 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:22:23.039 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:23.039 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:22:23.039 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:23.039 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:22:23.039 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:22:23.039 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:23.040 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:23.040 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:22:23.040 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:23.040 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:23.040 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:23.040 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:22:23.040 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:23.040 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:22:23.040 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:22:23.040 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:22:23.040 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:22:23.040 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:23.040 17:13:10 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:23.303 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:23.303 fio-3.35 00:22:23.303 Starting 1 thread 00:22:23.303 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.841 00:22:25.841 test: (groupid=0, jobs=1): err= 0: pid=3154929: Wed May 15 17:13:13 2024 00:22:25.841 read: IOPS=9760, BW=153MiB/s (160MB/s)(306MiB/2008msec) 00:22:25.841 slat (nsec): min=2588, max=86156, avg=3644.84, stdev=2874.86 00:22:25.841 clat (usec): min=2722, max=26975, avg=7783.08, stdev=3130.35 00:22:25.841 lat (usec): min=2725, max=26991, avg=7786.72, 
stdev=3132.47 00:22:25.841 clat percentiles (usec): 00:22:25.841 | 1.00th=[ 3785], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 5800], 00:22:25.841 | 30.00th=[ 6259], 40.00th=[ 6718], 50.00th=[ 7177], 60.00th=[ 7701], 00:22:25.841 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[10290], 95.00th=[16909], 00:22:25.841 | 99.00th=[19530], 99.50th=[20055], 99.90th=[21103], 99.95th=[21365], 00:22:25.841 | 99.99th=[21627] 00:22:25.841 bw ( KiB/s): min=50880, max=101077, per=49.40%, avg=77157.25, stdev=20821.25, samples=4 00:22:25.841 iops : min= 3180, max= 6317, avg=4822.25, stdev=1301.21, samples=4 00:22:25.841 write: IOPS=5670, BW=88.6MiB/s (92.9MB/s)(158MiB/1783msec); 0 zone resets 00:22:25.841 slat (usec): min=29, max=386, avg=33.79, stdev= 9.89 00:22:25.841 clat (usec): min=4702, max=26647, avg=9446.69, stdev=3317.76 00:22:25.841 lat (usec): min=4733, max=26685, avg=9480.48, stdev=3321.44 00:22:25.841 clat percentiles (usec): 00:22:25.841 | 1.00th=[ 5866], 5.00th=[ 6521], 10.00th=[ 6980], 20.00th=[ 7504], 00:22:25.841 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:22:25.841 | 70.00th=[ 9372], 80.00th=[10421], 90.00th=[12256], 95.00th=[19530], 00:22:25.841 | 99.00th=[21627], 99.50th=[21890], 99.90th=[22414], 99.95th=[22414], 00:22:25.841 | 99.99th=[25035] 00:22:25.841 bw ( KiB/s): min=52480, max=105357, per=88.79%, avg=80563.25, stdev=21963.97, samples=4 00:22:25.841 iops : min= 3280, max= 6584, avg=5035.00, stdev=1372.44, samples=4 00:22:25.841 lat (msec) : 4=1.07%, 10=83.93%, 20=13.33%, 50=1.68% 00:22:25.841 cpu : usr=86.75%, sys=11.40%, ctx=39, majf=0, minf=1 00:22:25.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:25.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:25.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:25.841 issued rwts: total=19600,10111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:25.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:25.841 00:22:25.841 Run status group 0 (all jobs): 00:22:25.841 READ: bw=153MiB/s (160MB/s), 153MiB/s-153MiB/s (160MB/s-160MB/s), io=306MiB (321MB), run=2008-2008msec 00:22:25.841 WRITE: bw=88.6MiB/s (92.9MB/s), 88.6MiB/s-88.6MiB/s (92.9MB/s-92.9MB/s), io=158MiB (166MB), run=1783-1783msec 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:25.841 rmmod nvme_tcp 00:22:25.841 rmmod nvme_fabrics 00:22:25.841 rmmod nvme_keyring 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3153991 ']' 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3153991 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3153991 ']' 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3153991 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3153991 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3153991' 00:22:25.841 killing process with pid 3153991 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3153991 00:22:25.841 [2024-05-15 17:13:13.253215] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3153991 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:25.841 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:25.842 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:25.842 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:25.842 17:13:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.842 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:25.842 17:13:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.381 17:13:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:28.381 00:22:28.381 real 0m13.922s 00:22:28.381 user 0m41.349s 00:22:28.381 sys 0m5.566s 00:22:28.381 17:13:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:28.381 17:13:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:28.381 ************************************ 00:22:28.381 END TEST nvmf_fio_host 00:22:28.381 ************************************ 00:22:28.381 17:13:15 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:28.381 17:13:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:28.381 17:13:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:22:28.381 17:13:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:28.381 ************************************ 00:22:28.381 START TEST nvmf_failover 00:22:28.381 ************************************ 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:28.381 * Looking for test storage... 00:22:28.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:28.381 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.382 17:13:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.382 17:13:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.382 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:28.382 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:28.382 17:13:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:28.382 17:13:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:33.652 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:33.652 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.652 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:33.653 Found net devices under 0000:86:00.0: cvl_0_0 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:33.653 Found net devices under 0000:86:00.1: cvl_0_1 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.653 17:13:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:33.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:33.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:22:33.653 00:22:33.653 --- 10.0.0.2 ping statistics --- 00:22:33.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.653 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:22:33.653 00:22:33.653 --- 10.0.0.1 ping statistics --- 00:22:33.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.653 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3158673 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3158673 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3158673 ']' 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:33.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:33.653 17:13:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:33.653 [2024-05-15 17:13:21.182707] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
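[Editor's note] At this point nvmfappstart has launched nvmf_tgt inside the namespace with reactor mask 0xE, and the script waits on /var/tmp/spdk.sock before issuing RPCs. A simplified stand-in for that launch-and-wait pattern (the real waitforlisten helper in autotest_common.sh additionally handles retries and timeouts) could be:

  # Simplified editor's sketch; flags are the ones shown in the trace above.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # RPC socket appears once the app is up
  "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null     # any successful RPC confirms it answers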
00:22:33.653 [2024-05-15 17:13:21.182748] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:33.653 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.653 [2024-05-15 17:13:21.240158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:33.911 [2024-05-15 17:13:21.319061] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:33.911 [2024-05-15 17:13:21.319097] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:33.911 [2024-05-15 17:13:21.319104] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:33.911 [2024-05-15 17:13:21.319110] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:33.911 [2024-05-15 17:13:21.319115] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:33.911 [2024-05-15 17:13:21.319211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:33.911 [2024-05-15 17:13:21.319296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:33.911 [2024-05-15 17:13:21.319297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.477 17:13:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:34.477 17:13:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:22:34.477 17:13:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.477 17:13:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.477 17:13:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:34.477 17:13:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.477 17:13:22 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:34.735 [2024-05-15 17:13:22.195679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.735 17:13:22 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:34.994 Malloc0 00:22:34.994 17:13:22 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:34.994 17:13:22 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:35.252 17:13:22 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:35.509 [2024-05-15 17:13:22.921236] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:35.509 [2024-05-15 17:13:22.921493] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.509 17:13:22 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:35.509 [2024-05-15 17:13:23.109981] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:35.509 17:13:23 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:35.768 [2024-05-15 17:13:23.298593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:35.768 17:13:23 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:35.768 17:13:23 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3159154 00:22:35.768 17:13:23 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:35.768 17:13:23 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3159154 /var/tmp/bdevperf.sock 00:22:35.768 17:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3159154 ']' 00:22:35.768 17:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.768 17:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:35.768 17:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
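Taken together, the subsystem configuration and the host-side bdevperf launch driven through rpc.py above amount to the following short sequence (arguments exactly as logged; only the /var/jenkins/... prefix is shortened to the spdk repository root, and the trailing & is an assumption in place of the harness's pid/waitforlisten bookkeeping):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &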
00:22:35.768 17:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:35.768 17:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:36.026 17:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:36.026 17:13:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:22:36.026 17:13:23 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:36.593 NVMe0n1 00:22:36.593 17:13:23 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:36.852 00:22:36.852 17:13:24 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:36.852 17:13:24 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3159192 00:22:36.852 17:13:24 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:37.865 17:13:25 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.123 17:13:25 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:41.410 17:13:28 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:41.410 00:22:41.410 17:13:28 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:41.669 [2024-05-15 17:13:29.147538] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6c20 is same with the state(5) to be set 00:22:41.669 [2024-05-15 17:13:29.147590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6c20 is same with the state(5) to be set 00:22:41.669 [2024-05-15 17:13:29.147598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6c20 is same with the state(5) to be set 00:22:41.669 [2024-05-15 17:13:29.147604] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6c20 is same with the state(5) to be set 00:22:41.669 [2024-05-15 17:13:29.147611] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6c20 is same with the state(5) to be set 00:22:41.669 [2024-05-15 17:13:29.147617] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6c20 is same with the state(5) to be set 00:22:41.669 [2024-05-15 17:13:29.147623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6c20 is same with the state(5) to be set 00:22:41.669 [2024-05-15 17:13:29.147629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f6c20 is same with the state(5) to be set 00:22:41.669 17:13:29 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:44.958 17:13:32 
nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:44.958 [2024-05-15 17:13:32.338937] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.958 17:13:32 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:45.894 17:13:33 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:45.894 [2024-05-15 17:13:33.541188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134d7f0 is same with the state(5) to be set 00:22:45.894 [2024-05-15 17:13:33.541231] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134d7f0 is same with the state(5) to be set 00:22:45.894 [2024-05-15 17:13:33.541239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134d7f0 is same with the state(5) to be set 00:22:45.894 [2024-05-15 17:13:33.541246] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134d7f0 is same with the state(5) to be set 00:22:45.894 [2024-05-15 17:13:33.541258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134d7f0 is same with the state(5) to be set 00:22:45.894 [2024-05-15 17:13:33.541264] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134d7f0 is same with the state(5) to be set 00:22:46.152 17:13:33 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3159192 00:22:52.724 0 00:22:52.724 17:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3159154 00:22:52.724 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3159154 ']' 00:22:52.724 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3159154 00:22:52.724 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:22:52.724 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:52.724 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3159154 00:22:52.724 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:52.724 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:52.724 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3159154' 00:22:52.724 killing process with pid 3159154 00:22:52.724 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3159154 00:22:52.724 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3159154 00:22:52.724 17:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:52.724 [2024-05-15 17:13:23.352510] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
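Everything from host/failover.sh@35 through @59 above is the failover exercise itself: NVMe0 is attached over two portals so the NVMe bdev has an alternate path to move to, bdevperf keeps 128 outstanding 4096-byte verify I/Os running for 15 seconds, and listeners are removed and re-added underneath it. A condensed replay of the logged steps (paths shortened, sleeps as logged) would look like:

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    wait    # the logged run printed 0 here, i.e. the verify workload completed cleanly

The try.txt dump that follows is bdevperf's trace of that run; the long runs of 'ABORTED - SQ DELETION' completions appear to be the I/O that was in flight on whichever path had just lost its listener, and the clean completion above indicates those aborts were absorbed by the surviving path rather than surfacing as application errors.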
00:22:52.724 [2024-05-15 17:13:23.352559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159154 ] 00:22:52.724 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.724 [2024-05-15 17:13:23.407726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.724 [2024-05-15 17:13:23.483028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.724 Running I/O for 15 seconds... 00:22:52.724 [2024-05-15 17:13:25.512329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.724 [2024-05-15 17:13:25.512378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.724 [2024-05-15 17:13:25.512394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.724 [2024-05-15 17:13:25.512402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.724 [2024-05-15 17:13:25.512412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:93064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.724 [2024-05-15 17:13:25.512419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.724 [2024-05-15 17:13:25.512427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.724 [2024-05-15 17:13:25.512434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.724 [2024-05-15 17:13:25.512442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.724 [2024-05-15 17:13:25.512448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.724 [2024-05-15 17:13:25.512456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.724 [2024-05-15 17:13:25.512463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.724 [2024-05-15 17:13:25.512471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.724 [2024-05-15 17:13:25.512477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.724 [2024-05-15 17:13:25.512485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.724 [2024-05-15 17:13:25.512492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.724 [2024-05-15 17:13:25.512504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93112 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:52.724 [2024-05-15 17:13:25.512511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.724 [2024-05-15 17:13:25.512519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 
17:13:25.512661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.512989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.512995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.513003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.513009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.513017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.513023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.513031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.513038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.513046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.513052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.513060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.513067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.513074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.513081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.513089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.513096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.513110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.725 [2024-05-15 17:13:25.513117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.725 [2024-05-15 17:13:25.513124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513260] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513404] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.726 [2024-05-15 17:13:25.513452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.726 [2024-05-15 17:13:25.513478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93632 len:8 PRP1 0x0 PRP2 0x0 00:22:52.726 [2024-05-15 17:13:25.513484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.726 [2024-05-15 17:13:25.513501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.726 [2024-05-15 17:13:25.513507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93640 len:8 PRP1 0x0 PRP2 0x0 00:22:52.726 [2024-05-15 17:13:25.513513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.726 [2024-05-15 17:13:25.513524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.726 [2024-05-15 17:13:25.513529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93648 len:8 PRP1 0x0 PRP2 0x0 00:22:52.726 [2024-05-15 17:13:25.513535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.726 [2024-05-15 17:13:25.513547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.726 [2024-05-15 17:13:25.513552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93656 len:8 PRP1 0x0 PRP2 0x0 00:22:52.726 [2024-05-15 17:13:25.513558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 
17:13:25.513565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.726 [2024-05-15 17:13:25.513571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.726 [2024-05-15 17:13:25.513576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93664 len:8 PRP1 0x0 PRP2 0x0 00:22:52.726 [2024-05-15 17:13:25.513583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.726 [2024-05-15 17:13:25.513594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.726 [2024-05-15 17:13:25.513600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93672 len:8 PRP1 0x0 PRP2 0x0 00:22:52.726 [2024-05-15 17:13:25.513606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.726 [2024-05-15 17:13:25.513618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.726 [2024-05-15 17:13:25.513623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93680 len:8 PRP1 0x0 PRP2 0x0 00:22:52.726 [2024-05-15 17:13:25.513629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.726 [2024-05-15 17:13:25.513640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.726 [2024-05-15 17:13:25.513645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93688 len:8 PRP1 0x0 PRP2 0x0 00:22:52.726 [2024-05-15 17:13:25.513651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.726 [2024-05-15 17:13:25.513662] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.726 [2024-05-15 17:13:25.513667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93696 len:8 PRP1 0x0 PRP2 0x0 00:22:52.726 [2024-05-15 17:13:25.513675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.726 [2024-05-15 17:13:25.513686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.726 [2024-05-15 17:13:25.513691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93704 len:8 PRP1 0x0 PRP2 0x0 00:22:52.726 [2024-05-15 17:13:25.513697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.726 [2024-05-15 17:13:25.513703] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.726 [2024-05-15 17:13:25.513708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.726 [2024-05-15 17:13:25.513714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93712 len:8 PRP1 0x0 PRP2 0x0 00:22:52.726 [2024-05-15 17:13:25.513720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.513726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.513731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.513736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93720 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.513742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.513748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.513754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.513760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93728 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.513767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.513773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.513778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.513783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93736 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.513789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.513795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.513800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.513805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93744 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.513812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.513818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.513823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.513828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93752 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.513834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.513840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:22:52.727 [2024-05-15 17:13:25.513847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.513852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93760 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.513858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.513865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.513870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.513875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93768 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.513881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.513887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.513892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.513897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93776 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.513903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.513910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.513914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.513920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93784 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.513926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.513932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.513938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.513943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93792 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.513949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.513956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.513961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.513966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93800 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.513972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.513979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.513983] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.513989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93808 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.513995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.514001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.514006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.514011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93816 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.514018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.514025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.514032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.514037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93824 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.514043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.514049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.514055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.514060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93832 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.514066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.514073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.514077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.514082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93840 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.514088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.514095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.514099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.514105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93848 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.514111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.514118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.514124] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.514129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93856 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.514135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.514142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.514146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.514152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93864 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.514158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.514170] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.514176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.514181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93872 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.514187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.514194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.514199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.514204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93880 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.514212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.514218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.514224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.514230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93888 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.514236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.514242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.514247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 [2024-05-15 17:13:25.514252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93896 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.514258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.727 [2024-05-15 17:13:25.514265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.727 [2024-05-15 17:13:25.514269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.727 
[2024-05-15 17:13:25.514275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93904 len:8 PRP1 0x0 PRP2 0x0 00:22:52.727 [2024-05-15 17:13:25.514281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.514287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.514292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.514297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93912 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.514303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.514309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.514315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.514321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93920 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.514327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.514333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.514338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.514344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93928 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.514350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.514356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.514360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.514366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92920 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.514372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.514379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.514383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.514390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92928 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.514396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.514402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.514408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.514413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92936 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.514420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.514426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.524611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.524629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92944 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.524637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.524645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.524650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.524656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92952 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.524662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.524671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.524676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.524681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92960 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.524688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.524694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.524700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.524706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92968 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.524712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.524719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.524723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.524729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93936 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.524735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.524741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.524745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.524751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:92976 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.524757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.524763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.524775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.524780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92984 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.524786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.524792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.524798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.524803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92992 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.524810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.524816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.524821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.524826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93000 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.524832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.524839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.524843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.524849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93008 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.524855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.524861] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.524866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.524871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93016 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.524877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.524884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.524888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.524894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93024 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 
[2024-05-15 17:13:25.524900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.524907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.524912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.524917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93032 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.524923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.524930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.728 [2024-05-15 17:13:25.524934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.728 [2024-05-15 17:13:25.524940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93040 len:8 PRP1 0x0 PRP2 0x0 00:22:52.728 [2024-05-15 17:13:25.524946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.728 [2024-05-15 17:13:25.524989] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13732f0 was disconnected and freed. reset controller. 00:22:52.728 [2024-05-15 17:13:25.525003] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:52.728 [2024-05-15 17:13:25.525026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.728 [2024-05-15 17:13:25.525033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:25.525041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.729 [2024-05-15 17:13:25.525047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:25.525054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.729 [2024-05-15 17:13:25.525060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:25.525067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.729 [2024-05-15 17:13:25.525073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:25.525079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:52.729 [2024-05-15 17:13:25.525103] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1354400 (9): Bad file descriptor 00:22:52.729 [2024-05-15 17:13:25.528529] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.729 [2024-05-15 17:13:25.565790] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:52.729 [2024-05-15 17:13:29.150315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.729 [2024-05-15 17:13:29.150353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.729 [2024-05-15 17:13:29.150376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.729 [2024-05-15 17:13:29.150392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.729 [2024-05-15 17:13:29.150407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.729 [2024-05-15 17:13:29.150421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.729 [2024-05-15 17:13:29.150435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.729 [2024-05-15 17:13:29.150454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.729 [2024-05-15 17:13:29.150469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.729 [2024-05-15 17:13:29.150483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.729 [2024-05-15 17:13:29.150498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:52.729 [2024-05-15 17:13:29.150638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150780] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.729 [2024-05-15 17:13:29.150830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.729 [2024-05-15 17:13:29.150838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.150844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.150852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.150858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.150866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.150872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.150880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.150886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.150894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.150900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.150908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.150914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.150921] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.150928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.150936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.150942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.150949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.150955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.150963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.150969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.150977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.150983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.150992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.150998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.151012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.151026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.151041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.151055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23808 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.151070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.151084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.151098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.151112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.151126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.151140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.151153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.151174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.730 [2024-05-15 17:13:29.151188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.730 [2024-05-15 17:13:29.151214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23880 len:8 PRP1 0x0 PRP2 0x0 00:22:52.730 [2024-05-15 17:13:29.151221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151230] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.730 [2024-05-15 17:13:29.151235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.730 [2024-05-15 17:13:29.151241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23888 len:8 PRP1 0x0 PRP2 0x0 00:22:52.730 [2024-05-15 17:13:29.151247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.730 [2024-05-15 17:13:29.151259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.730 [2024-05-15 17:13:29.151264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23896 len:8 PRP1 0x0 PRP2 0x0 00:22:52.730 [2024-05-15 17:13:29.151270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.730 [2024-05-15 17:13:29.151281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.730 [2024-05-15 17:13:29.151286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:8 PRP1 0x0 PRP2 0x0 00:22:52.730 [2024-05-15 17:13:29.151293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.730 [2024-05-15 17:13:29.151304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.730 [2024-05-15 17:13:29.151309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23912 len:8 PRP1 0x0 PRP2 0x0 00:22:52.730 [2024-05-15 17:13:29.151315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.730 [2024-05-15 17:13:29.151326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.730 [2024-05-15 17:13:29.151331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23920 len:8 PRP1 0x0 PRP2 0x0 00:22:52.730 [2024-05-15 17:13:29.151338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.730 [2024-05-15 17:13:29.151349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.730 [2024-05-15 17:13:29.151354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23928 len:8 PRP1 0x0 PRP2 0x0 00:22:52.730 [2024-05-15 17:13:29.151360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:22:52.730 [2024-05-15 17:13:29.151373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.730 [2024-05-15 17:13:29.151378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:8 PRP1 0x0 PRP2 0x0 00:22:52.730 [2024-05-15 17:13:29.151384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.730 [2024-05-15 17:13:29.151395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.730 [2024-05-15 17:13:29.151400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23944 len:8 PRP1 0x0 PRP2 0x0 00:22:52.730 [2024-05-15 17:13:29.151407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.730 [2024-05-15 17:13:29.151413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.730 [2024-05-15 17:13:29.151417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.730 [2024-05-15 17:13:29.151423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23952 len:8 PRP1 0x0 PRP2 0x0 00:22:52.730 [2024-05-15 17:13:29.151429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151435] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23960 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23976 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151509] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23984 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23992 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24008 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24016 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24024 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151648] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24040 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24048 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24056 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24072 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 
[2024-05-15 17:13:29.151790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24080 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24088 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24104 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24112 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24120 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151940] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24136 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.731 [2024-05-15 17:13:29.151962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.731 [2024-05-15 17:13:29.151967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.731 [2024-05-15 17:13:29.151974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24144 len:8 PRP1 0x0 PRP2 0x0 00:22:52.731 [2024-05-15 17:13:29.151980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.151987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.151991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.151997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24152 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.152003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.152009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.152014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.152019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.152027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.152034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.152039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.152044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24168 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.152050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.152056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.152061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.152066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:24176 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.152072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.152080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.152085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.152090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24184 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.152096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.152102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.152107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.152112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.152118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.152124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.152129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.152134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24200 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.152140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.152147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.152151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.152158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24208 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.152168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.152174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.152179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.152184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24216 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.152190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.152197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.152202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.152207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:8 PRP1 0x0 PRP2 0x0 
00:22:52.732 [2024-05-15 17:13:29.152215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.152221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.152226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.152232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24232 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.152238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.152245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.152250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.152255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24240 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.152262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.152268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.152273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.152279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24248 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.152285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.152291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.152296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.152301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.152307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.152313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.152318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.152323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24264 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.162362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.162373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.162379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.162387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24272 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.162395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.162403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.162409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.162414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24280 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.162421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.162428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.162434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.162439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.162447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.162454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.162460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.162465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24296 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.162472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.162479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.162486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.162492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24304 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.162499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.162506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.162511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.162517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24312 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.162524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.162531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.162536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.162542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.162549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.162555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.732 [2024-05-15 17:13:29.162561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.732 [2024-05-15 17:13:29.162567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24328 len:8 PRP1 0x0 PRP2 0x0 00:22:52.732 [2024-05-15 17:13:29.162573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.732 [2024-05-15 17:13:29.162580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.733 [2024-05-15 17:13:29.162585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.733 [2024-05-15 17:13:29.162591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24336 len:8 PRP1 0x0 PRP2 0x0 00:22:52.733 [2024-05-15 17:13:29.162598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.162605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.733 [2024-05-15 17:13:29.162610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.733 [2024-05-15 17:13:29.162616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24344 len:8 PRP1 0x0 PRP2 0x0 00:22:52.733 [2024-05-15 17:13:29.162623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.162630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.733 [2024-05-15 17:13:29.162635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.733 [2024-05-15 17:13:29.162641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:8 PRP1 0x0 PRP2 0x0 00:22:52.733 [2024-05-15 17:13:29.162648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.162655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.733 [2024-05-15 17:13:29.162661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.733 [2024-05-15 17:13:29.162666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24360 len:8 PRP1 0x0 PRP2 0x0 00:22:52.733 [2024-05-15 17:13:29.162673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.162684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.733 [2024-05-15 17:13:29.162690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.733 [2024-05-15 17:13:29.162695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24368 len:8 PRP1 0x0 PRP2 0x0 00:22:52.733 [2024-05-15 17:13:29.162702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:52.733 [2024-05-15 17:13:29.162709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.733 [2024-05-15 17:13:29.162714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.733 [2024-05-15 17:13:29.162720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24376 len:8 PRP1 0x0 PRP2 0x0 00:22:52.733 [2024-05-15 17:13:29.162727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.162734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.733 [2024-05-15 17:13:29.162739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.733 [2024-05-15 17:13:29.162745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:8 PRP1 0x0 PRP2 0x0 00:22:52.733 [2024-05-15 17:13:29.162752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.162759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.733 [2024-05-15 17:13:29.162764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.733 [2024-05-15 17:13:29.162770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24392 len:8 PRP1 0x0 PRP2 0x0 00:22:52.733 [2024-05-15 17:13:29.162776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.162784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.733 [2024-05-15 17:13:29.162789] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.733 [2024-05-15 17:13:29.162795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24400 len:8 PRP1 0x0 PRP2 0x0 00:22:52.733 [2024-05-15 17:13:29.162801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.162809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.733 [2024-05-15 17:13:29.162814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.733 [2024-05-15 17:13:29.162820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24408 len:8 PRP1 0x0 PRP2 0x0 00:22:52.733 [2024-05-15 17:13:29.162827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.162835] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.733 [2024-05-15 17:13:29.162840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.733 [2024-05-15 17:13:29.162846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:8 PRP1 0x0 PRP2 0x0 00:22:52.733 [2024-05-15 17:13:29.162853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.162860] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.733 [2024-05-15 17:13:29.162865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.733 [2024-05-15 17:13:29.162871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24424 len:8 PRP1 0x0 PRP2 0x0 00:22:52.733 [2024-05-15 17:13:29.162879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.162887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.733 [2024-05-15 17:13:29.162892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.733 [2024-05-15 17:13:29.162898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24432 len:8 PRP1 0x0 PRP2 0x0 00:22:52.733 [2024-05-15 17:13:29.162905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.162946] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x151de00 was disconnected and freed. reset controller. 00:22:52.733 [2024-05-15 17:13:29.162955] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:52.733 [2024-05-15 17:13:29.162977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.733 [2024-05-15 17:13:29.162985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.162993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.733 [2024-05-15 17:13:29.163000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.163007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.733 [2024-05-15 17:13:29.163014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.163023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.733 [2024-05-15 17:13:29.163029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:29.163036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:52.733 [2024-05-15 17:13:29.163067] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1354400 (9): Bad file descriptor 00:22:52.733 [2024-05-15 17:13:29.166197] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.733 [2024-05-15 17:13:29.206750] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
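[editor's note] Every completion in the burst above is printed by spdk_nvme_print_completion with the same status pair, (00/08): the submission queue was deleted while the path went down, so the queued WRITEs are aborted and completed manually while bdev_nvme fails over to the next path (10.0.0.2:4421 to 10.0.0.2:4422 here). The pair is (SCT/SC) in hex. A minimal sketch of reading it, covering only the code that actually occurs in this log; the helper name and the partial mapping are illustrative and not part of the test scripts:

  # decode_status SCT SC : partial decoder for the "(SCT/SC)" pair shown above.
  # Only the status code seen in this log is mapped; anything else is left as unknown.
  decode_status() {
    local sct=$1 sc=$2
    case "${sct}/${sc}" in
      00/08) echo "GENERIC / COMMAND ABORTED DUE TO SQ DELETION" ;;
      *)     echo "unknown (SCT=0x${sct} SC=0x${sc})" ;;
    esac
  }
  decode_status 00 08   # -> GENERIC / COMMAND ABORTED DUE TO SQ DELETION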
00:22:52.733 [2024-05-15 17:13:33.541462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.733 [2024-05-15 17:13:33.541498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:33.541513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.733 [2024-05-15 17:13:33.541520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:33.541529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.733 [2024-05-15 17:13:33.541536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:33.541544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.733 [2024-05-15 17:13:33.541550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:33.541563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.733 [2024-05-15 17:13:33.541569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.733 [2024-05-15 17:13:33.541577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.734 [2024-05-15 17:13:33.541583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.734 [2024-05-15 17:13:33.541597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.734 [2024-05-15 17:13:33.541611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.734 [2024-05-15 17:13:33.541624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.734 [2024-05-15 17:13:33.541638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541646] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.734 [2024-05-15 17:13:33.541652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.734 [2024-05-15 17:13:33.541666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.734 [2024-05-15 17:13:33.541680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.734 [2024-05-15 17:13:33.541695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.734 [2024-05-15 17:13:33.541709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.734 [2024-05-15 17:13:33.541723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.734 [2024-05-15 17:13:33.541738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.734 [2024-05-15 17:13:33.541752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541788] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.734 [2024-05-15 17:13:33.541862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20208 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.541989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.541997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.542003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.542010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.542016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.542024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.542030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.542038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.542044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.542052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.542058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.542065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:52.734 [2024-05-15 17:13:33.542071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.542079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.542085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.542095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.542101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.542109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.542115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.542123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.734 [2024-05-15 17:13:33.542129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.734 [2024-05-15 17:13:33.542137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 
17:13:33.542220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.735 [2024-05-15 17:13:33.542713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.735 [2024-05-15 17:13:33.542721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:52.736 [2024-05-15 17:13:33.542790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542930] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.542987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.542993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.736 [2024-05-15 17:13:33.543208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:46 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.736 [2024-05-15 17:13:33.543223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.736 [2024-05-15 17:13:33.543236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.736 [2024-05-15 17:13:33.543250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.736 [2024-05-15 17:13:33.543263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.736 [2024-05-15 17:13:33.543277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.736 [2024-05-15 17:13:33.543285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.737 [2024-05-15 17:13:33.543292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.737 [2024-05-15 17:13:33.543310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:52.737 [2024-05-15 17:13:33.543317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:52.737 [2024-05-15 17:13:33.543322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21136 len:8 PRP1 0x0 PRP2 0x0 00:22:52.737 [2024-05-15 17:13:33.543330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.737 [2024-05-15 17:13:33.543371] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x151dbf0 was disconnected and freed. reset controller. 
00:22:52.737 [2024-05-15 17:13:33.543379] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:52.737 [2024-05-15 17:13:33.543398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.737 [2024-05-15 17:13:33.543407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.737 [2024-05-15 17:13:33.543415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.737 [2024-05-15 17:13:33.543422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.737 [2024-05-15 17:13:33.543429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.737 [2024-05-15 17:13:33.543435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.737 [2024-05-15 17:13:33.543442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.737 [2024-05-15 17:13:33.543448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.737 [2024-05-15 17:13:33.543454] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:52.737 [2024-05-15 17:13:33.546348] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:52.737 [2024-05-15 17:13:33.546378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1354400 (9): Bad file descriptor 00:22:52.737 [2024-05-15 17:13:33.620609] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
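Each path switch is recorded as a bdev_nvme_failover_trid notice ("Start failover from <old trid> to <new trid>"), and once the stale admin queue has been flushed and the controller reconnected it is followed by "Resetting controller successful". The functional check further below requires exactly three of those reset notices; a hedged sketch of verifying the same thing by hand against the saved log (again assuming try.txt holds the bdevperf output):

  # list the failover hops in the order they happened
  grep 'Start failover from' try.txt
  # the check failover.sh performs: the reset must have completed successfully three times
  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count == 3 )) || echo "unexpected reset count: $count"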
00:22:52.737 00:22:52.737 Latency(us) 00:22:52.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.737 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:52.737 Verification LBA range: start 0x0 length 0x4000 00:22:52.737 NVMe0n1 : 15.00 10732.16 41.92 431.17 0.00 11442.78 455.90 21199.47 00:22:52.737 =================================================================================================================== 00:22:52.737 Total : 10732.16 41.92 431.17 0.00 11442.78 455.90 21199.47 00:22:52.737 Received shutdown signal, test time was about 15.000000 seconds 00:22:52.737 00:22:52.737 Latency(us) 00:22:52.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.737 =================================================================================================================== 00:22:52.737 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.737 17:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:52.737 17:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:52.737 17:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:52.737 17:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3161697 00:22:52.737 17:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:52.737 17:13:39 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3161697 /var/tmp/bdevperf.sock 00:22:52.737 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3161697 ']' 00:22:52.737 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.737 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:52.737 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:52.737 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:52.737 17:13:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:52.995 17:13:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:52.995 17:13:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:22:52.995 17:13:40 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:53.254 [2024-05-15 17:13:40.767085] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:53.254 17:13:40 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:53.512 [2024-05-15 17:13:40.951601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:53.512 17:13:40 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:53.771 NVMe0n1 00:22:53.771 17:13:41 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:54.030 00:22:54.289 17:13:41 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:54.548 00:22:54.548 17:13:41 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:54.548 17:13:41 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:54.548 17:13:42 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:54.807 17:13:42 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:58.094 17:13:45 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:58.094 17:13:45 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:58.094 17:13:45 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3162623 00:22:58.094 17:13:45 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:58.094 17:13:45 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3162623 00:22:59.030 0 00:22:59.030 17:13:46 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:59.030 [2024-05-15 17:13:39.798696] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:22:59.030 [2024-05-15 17:13:39.798748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161697 ] 00:22:59.031 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.031 [2024-05-15 17:13:39.853197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.031 [2024-05-15 17:13:39.922123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.031 [2024-05-15 17:13:42.298288] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:59.031 [2024-05-15 17:13:42.298334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.031 [2024-05-15 17:13:42.298344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.031 [2024-05-15 17:13:42.298353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.031 [2024-05-15 17:13:42.298360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.031 [2024-05-15 17:13:42.298367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.031 [2024-05-15 17:13:42.298374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.031 [2024-05-15 17:13:42.298381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:59.031 [2024-05-15 17:13:42.298387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:59.031 [2024-05-15 17:13:42.298393] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:59.031 [2024-05-15 17:13:42.298416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc7400 (9): Bad file descriptor 00:22:59.031 [2024-05-15 17:13:42.298428] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:59.031 [2024-05-15 17:13:42.309376] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:59.031 Running I/O for 1 seconds... 
00:22:59.031 00:22:59.031 Latency(us) 00:22:59.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.031 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:59.031 Verification LBA range: start 0x0 length 0x4000 00:22:59.031 NVMe0n1 : 1.00 10824.07 42.28 0.00 0.00 11775.70 1852.10 9801.91 00:22:59.031 =================================================================================================================== 00:22:59.031 Total : 10824.07 42.28 0.00 0.00 11775.70 1852.10 9801.91 00:22:59.031 17:13:46 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:59.031 17:13:46 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:59.291 17:13:46 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.550 17:13:47 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:59.550 17:13:47 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:59.550 17:13:47 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.808 17:13:47 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:03.095 17:13:50 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:03.095 17:13:50 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:03.095 17:13:50 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3161697 00:23:03.095 17:13:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3161697 ']' 00:23:03.095 17:13:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3161697 00:23:03.095 17:13:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:23:03.095 17:13:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:03.095 17:13:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3161697 00:23:03.095 17:13:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:03.095 17:13:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:03.095 17:13:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3161697' 00:23:03.095 killing process with pid 3161697 00:23:03.095 17:13:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3161697 00:23:03.095 17:13:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3161697 00:23:03.354 17:13:50 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:03.354 17:13:50 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:03.613 
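After the one-second verification pass, the script drops the secondary paths one at a time and checks after each step that the NVMe0 controller is still registered with bdevperf. A condensed sketch of the sequence traced above (script paths shortened):

  # remove the 4422 path, then confirm the controller object is still there
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
  # remove the 4421 path as well, give failover a moment, and check again
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0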
17:13:51 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:03.613 rmmod nvme_tcp 00:23:03.613 rmmod nvme_fabrics 00:23:03.613 rmmod nvme_keyring 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3158673 ']' 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3158673 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3158673 ']' 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3158673 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3158673 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3158673' 00:23:03.613 killing process with pid 3158673 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3158673 00:23:03.613 [2024-05-15 17:13:51.124898] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:03.613 17:13:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3158673 00:23:03.872 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:03.872 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:03.872 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:03.872 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.872 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:03.872 17:13:51 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.872 17:13:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:03.872 17:13:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.777 17:13:53 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:05.777 00:23:05.777 real 0m37.801s 00:23:05.777 user 
2m1.678s 00:23:05.777 sys 0m7.338s 00:23:05.777 17:13:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:05.777 17:13:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:05.777 ************************************ 00:23:05.777 END TEST nvmf_failover 00:23:05.777 ************************************ 00:23:06.059 17:13:53 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:06.059 17:13:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:06.059 17:13:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:06.059 17:13:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:06.059 ************************************ 00:23:06.059 START TEST nvmf_host_discovery 00:23:06.059 ************************************ 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:06.059 * Looking for test storage... 00:23:06.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:06.059 17:13:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:11.345 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:11.346 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:11.346 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:11.346 Found net devices under 0000:86:00.0: cvl_0_0 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:11.346 Found net devices under 0000:86:00.1: cvl_0_1 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.346 17:13:58 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:11.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:23:11.346 00:23:11.346 --- 10.0.0.2 ping statistics --- 00:23:11.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.346 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:11.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:23:11.346 00:23:11.346 --- 10.0.0.1 ping statistics --- 00:23:11.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.346 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3167046 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3167046 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3167046 ']' 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:11.346 17:13:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.346 [2024-05-15 17:13:58.853687] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:23:11.346 [2024-05-15 17:13:58.853733] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.346 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.346 [2024-05-15 17:13:58.911084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.346 [2024-05-15 17:13:58.994093] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:11.346 [2024-05-15 17:13:58.994127] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.346 [2024-05-15 17:13:58.994135] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.346 [2024-05-15 17:13:58.994140] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.346 [2024-05-15 17:13:58.994145] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:11.346 [2024-05-15 17:13:58.994161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.284 [2024-05-15 17:13:59.699787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.284 [2024-05-15 17:13:59.711774] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:12.284 [2024-05-15 17:13:59.711959] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.284 null0 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.284 null1 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3167172 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3167172 /tmp/host.sock 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3167172 ']' 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:12.284 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.284 17:13:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.284 [2024-05-15 17:13:59.783818] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:23:12.284 [2024-05-15 17:13:59.783858] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3167172 ] 00:23:12.284 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.284 [2024-05-15 17:13:59.837016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.284 [2024-05-15 17:13:59.916280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.222 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:13.222 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:23:13.222 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.222 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:13.222 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.222 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.222 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.222 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:13.222 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.222 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.222 17:14:00 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.222 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:13.222 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:13.222 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.223 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.483 [2024-05-15 17:14:00.951215] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:13.483 
17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.483 17:14:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.483 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:13.483 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:13.483 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.483 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.483 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.484 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.744 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:23:13.744 17:14:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:23:14.312 [2024-05-15 17:14:01.669329] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:14.312 [2024-05-15 17:14:01.669351] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:14.312 [2024-05-15 17:14:01.669366] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:14.312 [2024-05-15 17:14:01.755619] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:14.570 [2024-05-15 17:14:01.974032] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:23:14.570 [2024-05-15 17:14:01.974052] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:14.570 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:14.570 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:14.570 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:14.571 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:14.830 17:14:02 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.830 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.831 [2024-05-15 17:14:02.463339] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:14.831 [2024-05-15 17:14:02.463643] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:14.831 [2024-05-15 17:14:02.463664] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:14.831 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.091 [2024-05-15 17:14:02.550167] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:15.091 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:15.092 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:15.092 17:14:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:15.092 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.092 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.092 17:14:02 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.092 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:15.092 17:14:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:23:15.350 [2024-05-15 17:14:02.857382] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:15.350 [2024-05-15 17:14:02.857399] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:15.350 [2024-05-15 17:14:02.857404] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.289 [2024-05-15 17:14:03.722944] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:16.289 [2024-05-15 17:14:03.722965] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:16.289 [2024-05-15 17:14:03.726086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.289 [2024-05-15 17:14:03.726101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.289 [2024-05-15 17:14:03.726110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.289 [2024-05-15 17:14:03.726116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.289 [2024-05-15 17:14:03.726124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.289 [2024-05-15 17:14:03.726130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.289 [2024-05-15 17:14:03.726137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.289 [2024-05-15 17:14:03.726143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.289 [2024-05-15 17:14:03.726150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824220 is same with the state(5) to be set 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:16.289 17:14:03 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.289 [2024-05-15 17:14:03.736100] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824220 (9): Bad file descriptor 00:23:16.289 [2024-05-15 17:14:03.746138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.289 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.289 [2024-05-15 17:14:03.746421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.289 [2024-05-15 17:14:03.746550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.289 [2024-05-15 17:14:03.746561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824220 with addr=10.0.0.2, port=4420 00:23:16.289 [2024-05-15 17:14:03.746568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824220 is same with the state(5) to be set 00:23:16.289 [2024-05-15 17:14:03.746580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824220 (9): Bad file descriptor 00:23:16.289 [2024-05-15 17:14:03.746590] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.289 [2024-05-15 17:14:03.746596] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.289 [2024-05-15 17:14:03.746604] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.289 [2024-05-15 17:14:03.746615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
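A minimal sketch of the waitforcondition polling helper exercised throughout this trace, reconstructed from the xtrace entries above (the condition string, the max=10 retry budget, the eval, the return 0 and the sleep 1 all appear in the trace; the exact loop shape and the failure return are assumptions, not the real common/autotest_common.sh source):

    # Polling helper as suggested by the common/autotest_common.sh xtrace above.
    # Reconstructed sketch; the real implementation may differ in detail.
    waitforcondition() {
        local cond=$1    # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10     # retry budget, seen as 'local max=10' in the trace
        while (( max-- )); do
            eval "$cond" && return 0   # condition met, stop waiting
            sleep 1                    # matches the 'sleep 1' steps in the trace
        done
        return 1                       # assumed: give up after ~10 attempts
    }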
00:23:16.289 [2024-05-15 17:14:03.756190] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.289 [2024-05-15 17:14:03.756457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.289 [2024-05-15 17:14:03.756553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.289 [2024-05-15 17:14:03.756563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824220 with addr=10.0.0.2, port=4420 00:23:16.289 [2024-05-15 17:14:03.756570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824220 is same with the state(5) to be set 00:23:16.289 [2024-05-15 17:14:03.756580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824220 (9): Bad file descriptor 00:23:16.289 [2024-05-15 17:14:03.756589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.289 [2024-05-15 17:14:03.756595] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.289 [2024-05-15 17:14:03.756602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.289 [2024-05-15 17:14:03.756611] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:16.289 [2024-05-15 17:14:03.766239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.289 [2024-05-15 17:14:03.766500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.289 [2024-05-15 17:14:03.766758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.289 [2024-05-15 17:14:03.766768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824220 with addr=10.0.0.2, port=4420 00:23:16.289 [2024-05-15 17:14:03.766775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824220 is same with the state(5) to be set 00:23:16.290 [2024-05-15 17:14:03.766786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824220 (9): Bad file descriptor 00:23:16.290 [2024-05-15 17:14:03.766795] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.290 [2024-05-15 17:14:03.766801] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.290 [2024-05-15 17:14:03.766808] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.290 [2024-05-15 17:14:03.766817] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
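Stripped of the xtrace noise, the RPC sequence this part of discovery.sh has driven so far amounts to the following; every command is copied from an rpc_cmd invocation visible in the trace, in trace order, with arguments unchanged:

    # Target side (default RPC socket)
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Host side (-s /tmp/host.sock): after each AER the discovery service re-reads the
    # discovery log page and attaches/removes paths and bdevs, which is what the
    # get_subsystem_names/get_bdev_list/get_subsystem_paths checks in between wait for.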
00:23:16.290 [2024-05-15 17:14:03.776292] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.290 [2024-05-15 17:14:03.776569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.290 [2024-05-15 17:14:03.776692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.290 [2024-05-15 17:14:03.776704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824220 with addr=10.0.0.2, port=4420 00:23:16.290 [2024-05-15 17:14:03.776710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824220 is same with the state(5) to be set 00:23:16.290 [2024-05-15 17:14:03.776720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824220 (9): Bad file descriptor 00:23:16.290 [2024-05-15 17:14:03.776729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.290 [2024-05-15 17:14:03.776735] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.290 [2024-05-15 17:14:03.776742] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.290 [2024-05-15 17:14:03.776751] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.290 [2024-05-15 17:14:03.786342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.290 [2024-05-15 17:14:03.786604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.290 [2024-05-15 17:14:03.787621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.290 [2024-05-15 17:14:03.787641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824220 with addr=10.0.0.2, port=4420 00:23:16.290 
[2024-05-15 17:14:03.787650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824220 is same with the state(5) to be set 00:23:16.290 [2024-05-15 17:14:03.787664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824220 (9): Bad file descriptor 00:23:16.290 [2024-05-15 17:14:03.787683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.290 [2024-05-15 17:14:03.787689] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.290 [2024-05-15 17:14:03.787697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.290 [2024-05-15 17:14:03.787707] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:16.290 [2024-05-15 17:14:03.796395] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.290 [2024-05-15 17:14:03.796628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.290 [2024-05-15 17:14:03.796811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.290 [2024-05-15 17:14:03.796821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824220 with addr=10.0.0.2, port=4420 00:23:16.290 [2024-05-15 17:14:03.796828] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824220 is same with the state(5) to be set 00:23:16.290 [2024-05-15 17:14:03.796839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824220 (9): Bad file descriptor 00:23:16.290 [2024-05-15 17:14:03.796855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.290 [2024-05-15 17:14:03.796862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.290 [2024-05-15 17:14:03.796869] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.290 [2024-05-15 17:14:03.796878] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:16.290 [2024-05-15 17:14:03.806449] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.290 [2024-05-15 17:14:03.806740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.290 [2024-05-15 17:14:03.806996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.290 [2024-05-15 17:14:03.807008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824220 with addr=10.0.0.2, port=4420 00:23:16.290 [2024-05-15 17:14:03.807016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824220 is same with the state(5) to be set 00:23:16.290 [2024-05-15 17:14:03.807029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824220 (9): Bad file descriptor 00:23:16.290 [2024-05-15 17:14:03.807045] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.290 [2024-05-15 17:14:03.807052] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.290 [2024-05-15 17:14:03.807058] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.290 [2024-05-15 17:14:03.807067] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:16.290 [2024-05-15 17:14:03.816501] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.290 [2024-05-15 17:14:03.816728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.290 [2024-05-15 17:14:03.816913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.290 [2024-05-15 17:14:03.816923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824220 with addr=10.0.0.2, port=4420 00:23:16.290 [2024-05-15 17:14:03.816930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824220 is same with the state(5) to be set 00:23:16.290 [2024-05-15 17:14:03.816941] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824220 (9): Bad file descriptor 00:23:16.290 [2024-05-15 17:14:03.816950] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.290 [2024-05-15 17:14:03.816956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.290 [2024-05-15 17:14:03.816962] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.290 [2024-05-15 17:14:03.816971] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
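The repeated rpc_cmd/jq/sort/xargs fragments in the trace are small helpers in host/discovery.sh; an approximate reconstruction, pieced together from the commands and the notification_count/notify_id values printed above (treat it as a sketch, not the literal script):

    # Approximate host/discovery.sh helpers, as implied by the xtrace.
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_notification_count() {
        # Ask only for notifications newer than the last seen id, then advance it;
        # this matches the -i 0/1/2 arguments and the notify_id=0,1,2,2,4 values above.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }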
00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.290 [2024-05-15 17:14:03.826553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.290 [2024-05-15 17:14:03.826815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.290 [2024-05-15 17:14:03.826994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.290 [2024-05-15 17:14:03.827004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824220 with addr=10.0.0.2, port=4420 00:23:16.290 [2024-05-15 17:14:03.827010] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824220 is same with the state(5) to be set 00:23:16.290 [2024-05-15 17:14:03.827020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824220 (9): Bad file descriptor 00:23:16.290 [2024-05-15 17:14:03.827028] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.290 [2024-05-15 17:14:03.827034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.290 [2024-05-15 17:14:03.827040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.290 [2024-05-15 17:14:03.827048] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:23:16.290 [2024-05-15 17:14:03.836602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.290 [2024-05-15 17:14:03.836815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.290 [2024-05-15 17:14:03.836979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.290 [2024-05-15 17:14:03.836990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824220 with addr=10.0.0.2, port=4420 00:23:16.290 [2024-05-15 17:14:03.836997] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824220 is same with the state(5) to be set 00:23:16.290 [2024-05-15 17:14:03.837010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824220 (9): Bad file descriptor 00:23:16.290 [2024-05-15 17:14:03.837019] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.290 [2024-05-15 17:14:03.837025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.290 [2024-05-15 17:14:03.837031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.290 [2024-05-15 17:14:03.837040] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:16.290 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:16.291 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:16.291 17:14:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:16.291 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.291 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.291 [2024-05-15 17:14:03.846653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:16.291 [2024-05-15 17:14:03.846860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.291 [2024-05-15 17:14:03.846982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.291 [2024-05-15 17:14:03.846992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x824220 with addr=10.0.0.2, port=4420 00:23:16.291 [2024-05-15 17:14:03.846998] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x824220 is same with the state(5) to be set 00:23:16.291 [2024-05-15 17:14:03.847009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x824220 (9): Bad file descriptor 00:23:16.291 [2024-05-15 17:14:03.847018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:16.291 [2024-05-15 17:14:03.847023] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:16.291 [2024-05-15 17:14:03.847030] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:16.291 [2024-05-15 17:14:03.847039] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
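The condition being polled at this point lists the trsvcid of every path on the nvme0 controller; once the discovery poller notices that the 4420 listener is gone it drops that path, so the output should shrink from "4420 4421" to just "4421". The check itself, using the same RPC and jq filter as the host/discovery.sh@63 trace entries (exact quoting assumed):

    # Path list for one controller on the host application, as used by the @131 check.
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    # Expected: "4420 4421" immediately after the remove_listener call, then "4421"
    # once the discovery poller logs "NVM ...:4420 not found" as it does just below.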
00:23:16.291 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.291 [2024-05-15 17:14:03.851192] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:16.291 [2024-05-15 17:14:03.851207] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:16.291 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:23:16.291 17:14:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.669 17:14:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.669 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:23:17.670 17:14:05 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.670 17:14:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:18.613 [2024-05-15 17:14:06.201315] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:18.613 [2024-05-15 17:14:06.201332] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:18.613 [2024-05-15 17:14:06.201344] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:18.872 [2024-05-15 17:14:06.289613] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:19.131 [2024-05-15 17:14:06.558604] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:19.131 [2024-05-15 17:14:06.558635] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:19.131 request: 00:23:19.131 { 00:23:19.131 "name": "nvme", 00:23:19.131 "trtype": "tcp", 00:23:19.131 "traddr": "10.0.0.2", 00:23:19.131 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:19.131 "adrfam": "ipv4", 00:23:19.131 "trsvcid": "8009", 00:23:19.131 "wait_for_attach": true, 00:23:19.131 "method": "bdev_nvme_start_discovery", 00:23:19.131 "req_id": 1 00:23:19.131 } 00:23:19.131 Got JSON-RPC error response 00:23:19.131 response: 00:23:19.131 { 00:23:19.131 "code": -17, 00:23:19.131 "message": "File exists" 00:23:19.131 } 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.131 request: 00:23:19.131 { 00:23:19.131 "name": "nvme_second", 00:23:19.131 "trtype": "tcp", 00:23:19.131 "traddr": "10.0.0.2", 00:23:19.131 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:19.131 "adrfam": "ipv4", 00:23:19.131 "trsvcid": "8009", 00:23:19.131 "wait_for_attach": true, 00:23:19.131 "method": "bdev_nvme_start_discovery", 00:23:19.131 "req_id": 1 00:23:19.131 } 00:23:19.131 Got JSON-RPC error response 00:23:19.131 response: 00:23:19.131 { 00:23:19.131 "code": -17, 00:23:19.131 "message": "File exists" 00:23:19.131 } 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:19.131 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.132 17:14:06 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.132 17:14:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.512 [2024-05-15 17:14:07.794107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.512 [2024-05-15 17:14:07.794365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.512 [2024-05-15 17:14:07.794377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x837d40 with addr=10.0.0.2, port=8010 00:23:20.512 [2024-05-15 17:14:07.794389] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:20.512 [2024-05-15 17:14:07.794396] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:20.512 [2024-05-15 17:14:07.794402] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:21.449 [2024-05-15 17:14:08.796522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.449 [2024-05-15 17:14:08.796790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:21.449 [2024-05-15 17:14:08.796801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x837d40 with addr=10.0.0.2, port=8010 00:23:21.449 [2024-05-15 17:14:08.796812] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:21.449 [2024-05-15 17:14:08.796818] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:21.449 [2024-05-15 17:14:08.796824] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:22.385 [2024-05-15 17:14:09.798685] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:22.385 request: 00:23:22.385 { 00:23:22.385 "name": "nvme_second", 00:23:22.385 "trtype": "tcp", 00:23:22.385 "traddr": "10.0.0.2", 00:23:22.385 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:22.385 "adrfam": "ipv4", 00:23:22.385 "trsvcid": "8010", 00:23:22.385 "attach_timeout_ms": 3000, 00:23:22.385 "method": 
"bdev_nvme_start_discovery", 00:23:22.385 "req_id": 1 00:23:22.385 } 00:23:22.385 Got JSON-RPC error response 00:23:22.385 response: 00:23:22.385 { 00:23:22.385 "code": -110, 00:23:22.385 "message": "Connection timed out" 00:23:22.385 } 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3167172 00:23:22.385 17:14:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:22.386 rmmod nvme_tcp 00:23:22.386 rmmod nvme_fabrics 00:23:22.386 rmmod nvme_keyring 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3167046 ']' 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3167046 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 3167046 ']' 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 3167046 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3167046 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3167046' 00:23:22.386 killing process with pid 3167046 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 3167046 00:23:22.386 [2024-05-15 17:14:09.950891] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:22.386 17:14:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 3167046 00:23:22.645 17:14:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.645 17:14:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.645 17:14:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.645 17:14:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.645 17:14:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.645 17:14:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.645 17:14:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:22.645 17:14:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:25.180 00:23:25.180 real 0m18.742s 00:23:25.180 user 0m24.690s 00:23:25.180 sys 0m5.328s 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.180 ************************************ 00:23:25.180 END TEST nvmf_host_discovery 00:23:25.180 ************************************ 00:23:25.180 17:14:12 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:25.180 17:14:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:25.180 17:14:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:25.180 17:14:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:25.180 ************************************ 00:23:25.180 START TEST nvmf_host_multipath_status 00:23:25.180 ************************************ 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:25.180 * Looking for test storage... 
00:23:25.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:25.180 17:14:12 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:25.180 17:14:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:30.446 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.446 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:30.447 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:30.447 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
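For reference, both ports detected above report PCI ID 0x8086:0x159b, i.e. Intel E810 ports bound to the ice driver, which is what the e810 match in nvmf/common.sh keys on. A minimal sketch for listing them by hand, assuming lspci is installed on the test node:

  # select PCI devices by vendor:device ID, mirroring the scan performed by nvmf/common.sh above
  lspci -d 8086:159b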
00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:30.447 Found net devices under 0000:86:00.0: cvl_0_0 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:30.447 Found net devices under 0000:86:00.1: cvl_0_1 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:30.447 17:14:17 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:30.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:23:30.447 00:23:30.447 --- 10.0.0.2 ping statistics --- 00:23:30.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.447 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:30.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:30.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:23:30.447 00:23:30.447 --- 10.0.0.1 ping statistics --- 00:23:30.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.447 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:30.447 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:30.448 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3172385 00:23:30.448 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3172385 00:23:30.448 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:30.448 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3172385 ']' 00:23:30.448 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.448 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:30.448 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.448 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:30.448 17:14:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:30.448 [2024-05-15 17:14:17.806225] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
00:23:30.448 [2024-05-15 17:14:17.806271] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.448 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.448 [2024-05-15 17:14:17.862228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:30.448 [2024-05-15 17:14:17.944358] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.448 [2024-05-15 17:14:17.944389] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.448 [2024-05-15 17:14:17.944396] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.448 [2024-05-15 17:14:17.944403] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.448 [2024-05-15 17:14:17.944407] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:30.448 [2024-05-15 17:14:17.944440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.448 [2024-05-15 17:14:17.944444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.016 17:14:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:31.016 17:14:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:23:31.016 17:14:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:31.016 17:14:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:31.016 17:14:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:31.016 17:14:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.016 17:14:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3172385 00:23:31.016 17:14:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:31.274 [2024-05-15 17:14:18.804849] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.274 17:14:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:31.533 Malloc0 00:23:31.533 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:31.533 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:31.792 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.051 [2024-05-15 17:14:19.502643] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:23:32.051 [2024-05-15 17:14:19.502866] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.051 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:32.051 [2024-05-15 17:14:19.683318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:32.051 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:32.051 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3172858 00:23:32.051 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:32.051 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3172858 /var/tmp/bdevperf.sock 00:23:32.051 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3172858 ']' 00:23:32.051 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.051 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:32.051 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
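For readability, the target/host bring-up logged above condenses to the RPC sequence below. This is a sketch reconstructed from the commands in the log, not an extra step of the test: paths under the jenkins workspace are shortened to scripts/rpc.py and build/examples/bdevperf, and flags are reproduced as logged.

  # target side: TCP transport, a Malloc bdev (64 MB, 512-byte blocks), and a subsystem with ANA reporting (-r)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # two listeners on the same subsystem give the host two paths (ports 4420 and 4421)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # host side: bdevperf started idle (-z) on its own RPC socket, to be configured over /var/tmp/bdevperf.sock
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

waitforlisten then polls /var/tmp/bdevperf.sock, after which the two bdev_nvme_attach_controller calls (ports 4420 and 4421, the second with -x multipath) build the multipath Nvme0n1 device that the ANA-state checks below exercise.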
00:23:32.051 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:32.051 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:32.309 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:32.309 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:23:32.309 17:14:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:32.568 17:14:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:32.827 Nvme0n1 00:23:32.827 17:14:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:33.428 Nvme0n1 00:23:33.428 17:14:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:33.428 17:14:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:35.338 17:14:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:35.338 17:14:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:35.605 17:14:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:35.605 17:14:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:37.036 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:37.036 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:37.036 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.036 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:37.036 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.036 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:37.036 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.036 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:37.036 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:37.036 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:37.036 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:37.036 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.294 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.294 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:37.294 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.294 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:37.551 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.551 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:37.551 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.551 17:14:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:37.551 17:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.551 17:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:37.551 17:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.551 17:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:37.809 17:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.809 17:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:37.809 17:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:38.068 17:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:38.068 17:14:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:39.446 17:14:26 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:39.446 17:14:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:39.446 17:14:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.446 17:14:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:39.446 17:14:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:39.446 17:14:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:39.446 17:14:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.446 17:14:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:39.446 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.446 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:39.446 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.446 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:39.705 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.705 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:39.705 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.705 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:39.964 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.964 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:39.964 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.964 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:40.223 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.223 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:40.223 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.223 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:40.223 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.223 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:40.223 17:14:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:40.482 17:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:40.740 17:14:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:41.677 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:41.677 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:41.677 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.677 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:41.937 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.937 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:41.937 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.937 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:41.937 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:41.937 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:41.937 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.937 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:42.197 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.197 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:42.197 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.197 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:42.456 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.456 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:42.456 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.456 17:14:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:42.456 17:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.456 17:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:42.456 17:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.715 17:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:42.715 17:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.715 17:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:42.715 17:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:42.974 17:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:43.233 17:14:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:44.167 17:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:44.167 17:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:44.167 17:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.167 17:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:44.426 17:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.426 17:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:44.426 17:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:44.426 17:14:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.685 17:14:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:44.685 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:44.685 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.685 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:44.685 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.685 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:44.685 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.686 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:44.945 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.945 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:44.945 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.945 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:45.204 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.204 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:45.204 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.204 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:45.204 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.204 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:45.204 17:14:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:45.463 17:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:45.721 17:14:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:46.657 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:46.657 17:14:34 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:46.657 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.657 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:46.915 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:46.915 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:46.915 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.915 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:46.915 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:46.915 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:46.915 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.915 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:47.174 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.174 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:47.174 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.174 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:47.432 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.432 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:47.432 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.432 17:14:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:47.432 17:14:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.432 17:14:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:47.432 17:14:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.432 17:14:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:47.690 17:14:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.690 17:14:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:47.690 17:14:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:47.949 17:14:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:48.207 17:14:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:49.142 17:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:49.142 17:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:49.142 17:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.142 17:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:49.400 17:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:49.400 17:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:49.400 17:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:49.400 17:14:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.400 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.400 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:49.400 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:49.400 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.659 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.659 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:49.659 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.659 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:49.917 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- 
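[editor's note] Each set_ANA_state step in the trace issues nvmf_subsystem_listener_set_ana_state once per listener (4420 and 4421) against the target's default RPC socket, then the test sleeps one second so the initiator can observe the ANA change. A hedged sketch of that step, using the subsystem NQN, address, and states exactly as they appear in the traced commands (the wrapper itself is assumed):

  # Sketch of the set_ANA_state step seen at script lines @59/@60 above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  set_ANA_state() {
      # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  set_ANA_state inaccessible optimized   # as in the @112 step above
  sleep 1                                # give the host time to pick up the ANA change

The states exercised in this run are optimized, non_optimized, and inaccessible, in the combinations listed by the check_status calls.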
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.917 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:49.917 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.917 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:50.213 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:50.213 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:50.213 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.213 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:50.213 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.213 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:50.473 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:50.473 17:14:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:50.473 17:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:50.732 17:14:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:52.106 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:52.107 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:52.107 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.107 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:52.107 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.107 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:52.107 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.107 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
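[editor's note] The @116 step above switches the bdev from the default active_passive policy to active_active with bdev_nvme_set_multipath_policy; from that point on both the 4420 and 4421 paths can report current=true whenever their ANA state permits, which is what the following check_status true true ... calls verify. A small example of the same RPC against the bdevperf socket, with a jq summary added purely for illustration (parameters are taken from the trace; the summary query is not part of the test script):

  # Switch Nvme0n1 to active_active multipathing on the bdevperf instance,
  # then re-read the io_paths to see both paths' current/connected/accessible flags.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  "$rpc" -s "$sock" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  "$rpc" -s "$sock" bdev_nvme_get_io_paths \
      | jq '.poll_groups[].io_paths[] | {port: .transport.trsvcid, current, connected, accessible}'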
(.transport.trsvcid=="4421").current' 00:23:52.107 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.107 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:52.107 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.107 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:52.365 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.365 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:52.365 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.365 17:14:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:52.624 17:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.624 17:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:52.624 17:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:52.624 17:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.624 17:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.624 17:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:52.624 17:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.624 17:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:52.882 17:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.882 17:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:52.882 17:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:53.140 17:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:53.399 17:14:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:54.335 17:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:23:54.335 17:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:54.335 17:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.335 17:14:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:54.594 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.594 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:54.594 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.594 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:54.594 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.594 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:54.594 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.594 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:54.852 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.852 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:54.852 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.852 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:55.110 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.110 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:55.110 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.110 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:55.110 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.110 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:55.110 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:55.110 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.369 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.369 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:55.369 17:14:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:55.626 17:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:55.884 17:14:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:56.819 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:56.819 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:56.819 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.819 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:57.077 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.078 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:57.078 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.078 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:57.078 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.078 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:57.078 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.078 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:57.336 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.336 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:57.336 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.336 17:14:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:57.595 17:14:45 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.595 17:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:57.595 17:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.595 17:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:57.882 17:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.882 17:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:57.882 17:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.882 17:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:57.882 17:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.882 17:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:57.882 17:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:58.141 17:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:58.400 17:14:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:59.336 17:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:59.336 17:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:59.336 17:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.336 17:14:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:59.595 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.595 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:59.595 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.595 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:59.595 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.595 17:14:47 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:59.595 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.595 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:59.854 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.854 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:59.854 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:59.854 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.113 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.113 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:00.113 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.113 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:00.113 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.113 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:00.113 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.113 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:00.371 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.371 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3172858 00:24:00.371 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3172858 ']' 00:24:00.371 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3172858 00:24:00.371 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:24:00.371 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:00.371 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3172858 00:24:00.371 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:00.371 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:00.371 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
3172858' 00:24:00.371 killing process with pid 3172858 00:24:00.371 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3172858 00:24:00.371 17:14:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3172858 00:24:00.685 Connection closed with partial response: 00:24:00.685 00:24:00.685 00:24:00.685 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3172858 00:24:00.685 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:00.685 [2024-05-15 17:14:19.726989] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:24:00.685 [2024-05-15 17:14:19.727038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3172858 ] 00:24:00.685 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.685 [2024-05-15 17:14:19.777280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.685 [2024-05-15 17:14:19.850900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.685 Running I/O for 90 seconds... 00:24:00.685 [2024-05-15 17:14:32.978807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.978847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.978884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.978892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.978906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.978913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.978926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.978933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.978945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.978952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.978965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.978971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.978984] 
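[editor's note] From the @141 step onward the run dumps the bdevperf log (try.txt): the EAL/app startup lines, "Running I/O for 90 seconds...", and then long runs of qpair traces in which WRITE/READ commands are completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02). That is the path-related NVMe status the test expects while a listener's ANA state is inaccessible; it lets the initiator retry the I/O on the path that is still accessible. A quick, purely illustrative way to gauge how many completions hit that status in the captured log (hypothetical post-processing, not part of the test script):

  # Count the ANA 'inaccessible' completions recorded in the bdevperf log.
  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' "$log"

  # Optionally break them down by queue/command id.
  grep -o 'qid:[0-9]* cid:[0-9]*' "$log" | sort | uniq -c | sort -rn | head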
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.978990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.685 [2024-05-15 17:14:32.979332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:00.685 [2024-05-15 17:14:32.979597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.685 [2024-05-15 17:14:32.979604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:00.686 [2024-05-15 17:14:32.979967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.979986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.979999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.686 [2024-05-15 17:14:32.980692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:00.686 [2024-05-15 17:14:32.980705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.980712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.980724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.980731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.980743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.980750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.980762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.980769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.980781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.980787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.980799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.980814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.980826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.980833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.981656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:24:00.687 [2024-05-15 17:14:32.981707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.981942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.981954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.981961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.982109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.982129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.982148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.982172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.982191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.687 [2024-05-15 17:14:32.982209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.982228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.982249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.982272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.982291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.982310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.982329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.982347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.687 [2024-05-15 17:14:32.982368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:00.687 [2024-05-15 17:14:32.982380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:00.688 [2024-05-15 17:14:32.982405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.982835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.688 [2024-05-15 17:14:32.982842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 
00:24:00.688 [2024-05-15 17:14:32.983265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.688 [2024-05-15 17:14:32.983423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:00.688 [2024-05-15 17:14:32.983436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.983442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.983454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.983461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.983474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.983482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.983494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.983501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.983513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.689 [2024-05-15 17:14:32.994138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:00.689 [2024-05-15 17:14:32.994370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.994982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.689 [2024-05-15 17:14:32.994988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:00.689 [2024-05-15 17:14:32.995000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
00:24:00.690 [2024-05-15 17:14:32.995337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.690 [2024-05-15 17:14:32.995766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:00.690 [2024-05-15 17:14:32.995778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.690 [2024-05-15 17:14:32.995785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.995797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.691 [2024-05-15 17:14:32.995803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.995816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.691 [2024-05-15 17:14:32.995824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.995836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.691 [2024-05-15 17:14:32.995843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.995854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.691 [2024-05-15 17:14:32.995861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.995873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.691 [2024-05-15 17:14:32.995880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.995892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:00.691 [2024-05-15 17:14:32.995898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.995911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.995917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.995929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.995936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.995948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.995955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.995967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.995973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.995986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.995992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 
lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 
dnr:0 00:24:00.691 [2024-05-15 17:14:32.996471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.996509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.996516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.997181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.691 [2024-05-15 17:14:32.997197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.997212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.691 [2024-05-15 17:14:32.997219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:00.691 [2024-05-15 17:14:32.997232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997501] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.692 [2024-05-15 17:14:32.997649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:00.692 [2024-05-15 17:14:32.997690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.997945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.997952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.998539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.692 [2024-05-15 17:14:32.998552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:00.692 [2024-05-15 17:14:32.998566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.998585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.998604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.998623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.998642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.998660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.998679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.998698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.998722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.998741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.998759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.998778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.998796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.998815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:24:00.693 [2024-05-15 17:14:32.998833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.998853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.998861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.693 [2024-05-15 17:14:32.999327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.693 [2024-05-15 17:14:32.999347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.693 [2024-05-15 17:14:32.999366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.693 [2024-05-15 17:14:32.999385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.693 [2024-05-15 17:14:32.999404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.693 [2024-05-15 17:14:32.999423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:00.693 [2024-05-15 17:14:32.999435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:32.999441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:32.999454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.005380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.005403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.005425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.005444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.005463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:00.694 [2024-05-15 17:14:33.005482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.005503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.005522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.005541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.694 [2024-05-15 17:14:33.005560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.694 [2024-05-15 17:14:33.005579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.694 [2024-05-15 17:14:33.005598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.694 [2024-05-15 17:14:33.005617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.694 [2024-05-15 17:14:33.005636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.694 [2024-05-15 17:14:33.005655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.005987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 
lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.694 [2024-05-15 17:14:33.006000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:24:00.694 [2024-05-15 17:14:33.006386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:00.694 [2024-05-15 17:14:33.006424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.694 [2024-05-15 17:14:33.006431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.695 [2024-05-15 17:14:33.006449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.695 [2024-05-15 17:14:33.006468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.695 [2024-05-15 17:14:33.006487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.695 [2024-05-15 17:14:33.006506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.695 [2024-05-15 17:14:33.006525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.695 [2024-05-15 17:14:33.006545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.695 [2024-05-15 17:14:33.006564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.695 [2024-05-15 17:14:33.006583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.695 [2024-05-15 17:14:33.006602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.695 [2024-05-15 17:14:33.006621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.695 [2024-05-15 17:14:33.006639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:00.695 [2024-05-15 17:14:33.006941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.006991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.006999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.007011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.007018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.007030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.007036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:00.695 [2024-05-15 17:14:33.007049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.695 [2024-05-15 17:14:33.007055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.696 [2024-05-15 17:14:33.007093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:24:00.696 [2024-05-15 17:14:33.007516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:00.696 [2024-05-15 17:14:33.007708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.696 [2024-05-15 17:14:33.007715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008598] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.008761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 17:14:33.008780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 
17:14:33.008800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 17:14:33.008821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 17:14:33.008841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 17:14:33.008860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 17:14:33.008880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 17:14:33.008903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 17:14:33.008923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 17:14:33.008944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 17:14:33.008964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 17:14:33.008983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.008997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11592 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 17:14:33.009004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.009017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 17:14:33.009023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.009036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 17:14:33.009045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.009058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.697 [2024-05-15 17:14:33.009064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.009077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.009084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.009097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.009103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.009118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.009125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.009137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.009144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.009158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.009170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.009470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.009479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.009493] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.697 [2024-05-15 17:14:33.009500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:00.697 [2024-05-15 17:14:33.009512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 
17:14:33.009688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.009984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.009996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.010003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.010015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.010022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.010034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.010041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.010053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.010060] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.010072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.010079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.010091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.698 [2024-05-15 17:14:33.010098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:00.698 [2024-05-15 17:14:33.010111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.699 [2024-05-15 17:14:33.010117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.699 [2024-05-15 17:14:33.010136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 
17:14:33.010560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12008 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.699 [2024-05-15 17:14:33.010906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:54 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.010983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.010995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.011002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.011014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.011020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.011032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.011039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.011052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.011059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.011071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.011077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.011089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.011096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.011108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.011115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.011127] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.011133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.011146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.011153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.011172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.011179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.011191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.011198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:00.699 [2024-05-15 17:14:33.011210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.699 [2024-05-15 17:14:33.011216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 
m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.011981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.011993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.012002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.012021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.012039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.012058] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.012077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.012096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.012115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.012134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.012153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.012386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.012406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.700 [2024-05-15 17:14:33.012427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.700 [2024-05-15 17:14:33.012447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:00.700 [2024-05-15 17:14:33.012470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.700 [2024-05-15 17:14:33.012494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.700 [2024-05-15 17:14:33.012516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.700 [2024-05-15 17:14:33.012536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.700 [2024-05-15 17:14:33.012556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.700 [2024-05-15 17:14:33.012575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:00.700 [2024-05-15 17:14:33.012588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.012595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.012609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.012616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.012628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.012635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.012647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.012656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.012669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.012677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.012690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.012697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.012711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.012718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.012730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.012737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.012749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.701 [2024-05-15 17:14:33.012757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.012770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.701 [2024-05-15 17:14:33.012778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.012791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.701 [2024-05-15 17:14:33.012798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.012810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.701 [2024-05-15 17:14:33.012817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.012829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.701 [2024-05-15 17:14:33.012837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.701 [2024-05-15 17:14:33.013034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.701 [2024-05-15 17:14:33.013054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:24:00.701 [2024-05-15 17:14:33.013249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.013443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.013449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.016920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.016929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.016942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.016949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.016962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.016968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.016981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.016987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.017000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.017006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.017019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.017025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.017038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.017044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:00.701 [2024-05-15 17:14:33.017057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.701 [2024-05-15 17:14:33.017063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.702 [2024-05-15 17:14:33.017085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.702 [2024-05-15 17:14:33.017104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.702 [2024-05-15 17:14:33.017123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.702 [2024-05-15 17:14:33.017142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.702 [2024-05-15 17:14:33.017161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:00.702 [2024-05-15 17:14:33.017281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 
lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.702 [2024-05-15 17:14:33.017948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.017986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.017998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.018004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.018017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.018023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.018036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.018042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.018054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.018061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.018073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.018080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.018092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.702 [2024-05-15 17:14:33.018102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:00.702 [2024-05-15 17:14:33.018115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:24:00.703 [2024-05-15 17:14:33.018177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:61 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:00.703 [2024-05-15 17:14:33.018733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.703 [2024-05-15 17:14:33.018886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.703 [2024-05-15 17:14:33.018905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11512 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.703 [2024-05-15 17:14:33.018924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.703 [2024-05-15 17:14:33.018942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.703 [2024-05-15 17:14:33.018962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.703 [2024-05-15 17:14:33.018980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.018993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.703 [2024-05-15 17:14:33.018999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:00.703 [2024-05-15 17:14:33.019011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.019018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.019030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.019039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.019051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.019058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.019070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.019077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.019089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.019096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.019108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.019115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.019127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.019134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.019146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.019153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.019170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.019177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.019189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.704 [2024-05-15 17:14:33.019195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.019208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.704 [2024-05-15 17:14:33.019214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.019227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.704 [2024-05-15 17:14:33.019233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.019245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.704 [2024-05-15 17:14:33.019252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.019977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.704 [2024-05-15 17:14:33.019992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.704 [2024-05-15 17:14:33.020017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:24:00.704 [2024-05-15 17:14:33.020029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.704 [2024-05-15 17:14:33.020036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020407] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:00.704 [2024-05-15 17:14:33.020650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.704 [2024-05-15 17:14:33.020657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.020669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.705 [2024-05-15 17:14:33.020676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.020688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.020695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.020707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.020715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.020728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.020734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.020747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.020753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.020766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.020773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:66 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021285] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.705 [2024-05-15 17:14:33.021449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
00:24:00.705 [2024-05-15 17:14:33.021480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.021841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.705 [2024-05-15 17:14:33.021848] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:00.705 [2024-05-15 17:14:33.022223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:00.706 [2024-05-15 17:14:33.022411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 
lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.706 [2024-05-15 17:14:33.022961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.706 [2024-05-15 17:14:33.022980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.022992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.706 [2024-05-15 17:14:33.022999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.023012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.706 [2024-05-15 17:14:33.023019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.023031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.706 [2024-05-15 17:14:33.023038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.023050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.706 [2024-05-15 17:14:33.023057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.023069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.706 [2024-05-15 17:14:33.023076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.023088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.706 [2024-05-15 17:14:33.023095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.023107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.706 [2024-05-15 17:14:33.023114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.023126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.706 [2024-05-15 17:14:33.023133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.023145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.706 [2024-05-15 17:14:33.023152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.023170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.706 [2024-05-15 17:14:33.023178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 
m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.023190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.706 [2024-05-15 17:14:33.023197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.023209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.706 [2024-05-15 17:14:33.023216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:00.706 [2024-05-15 17:14:33.023229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.706 [2024-05-15 17:14:33.023236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.707 [2024-05-15 17:14:33.023286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.707 [2024-05-15 17:14:33.023306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.707 [2024-05-15 17:14:33.023325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.707 [2024-05-15 17:14:33.023344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.707 [2024-05-15 17:14:33.023543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.707 [2024-05-15 17:14:33.023563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.707 [2024-05-15 17:14:33.023582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:00.707 [2024-05-15 17:14:33.023959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.023982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.023994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.024001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.024021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.024040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.024059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.024079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.024100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.024119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.024138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 
nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.024158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.024184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.024204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.024224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.707 [2024-05-15 17:14:33.024243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.707 [2024-05-15 17:14:33.024261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.707 [2024-05-15 17:14:33.024280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.707 [2024-05-15 17:14:33.024299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.707 [2024-05-15 17:14:33.024318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.707 [2024-05-15 17:14:33.024337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.707 [2024-05-15 17:14:33.024652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.707 [2024-05-15 17:14:33.024672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:00.707 [2024-05-15 17:14:33.024684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:24:00.708 [2024-05-15 17:14:33.024838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.024979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.024991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.708 [2024-05-15 17:14:33.024998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025215] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 
17:14:33.025408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12312 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.025986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:00.708 [2024-05-15 17:14:33.025998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.708 [2024-05-15 17:14:33.026005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:12 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026578] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 
sqhd:0044 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.026833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.026921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.026928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.027126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.709 [2024-05-15 17:14:33.027148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:00.709 [2024-05-15 17:14:33.027451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.709 [2024-05-15 17:14:33.027458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:81 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.027780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.027798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.027822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.027843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.027862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.027881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.027894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.027901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:24:00.710 [2024-05-15 17:14:33.028407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.710 [2024-05-15 17:14:33.028567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:00.710 [2024-05-15 17:14:33.028695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.710 [2024-05-15 17:14:33.028702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.028714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.028721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.028733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.028739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.028752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.028759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.028771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.028778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.028790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.028797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.028809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.028816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.028828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.028835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.028847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.028854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:00.711 [2024-05-15 17:14:33.029272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 
lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.029759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.029766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.030049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.030058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.030072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.711 [2024-05-15 17:14:33.030079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.030091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.711 [2024-05-15 17:14:33.030098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:24:00.711 [2024-05-15 17:14:33.030110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.711 [2024-05-15 17:14:33.030117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.030129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.711 [2024-05-15 17:14:33.030136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.030150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.711 [2024-05-15 17:14:33.030157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.030174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.711 [2024-05-15 17:14:33.030181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.030193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.711 [2024-05-15 17:14:33.030200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.030212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.711 [2024-05-15 17:14:33.030219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.030231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.711 [2024-05-15 17:14:33.030238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.030251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.711 [2024-05-15 17:14:33.030257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.030269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.711 [2024-05-15 17:14:33.030276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.030289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.711 [2024-05-15 17:14:33.030295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:00.711 [2024-05-15 17:14:33.030307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.030391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.030410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.030428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.030447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.030466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.030485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.030722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:00.712 [2024-05-15 17:14:33.030897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.030985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.030992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.712 [2024-05-15 17:14:33.031513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.031532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.031551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.031570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.031589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.031610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:24:00.712 [2024-05-15 17:14:33.031627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.031636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.031663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.031872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.031892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.712 [2024-05-15 17:14:33.031914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:00.712 [2024-05-15 17:14:33.031926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.031933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.031945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.031952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.031964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.031971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.031983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.031990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.713 [2024-05-15 17:14:33.032193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:00.713 [2024-05-15 17:14:33.032402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.032984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.032997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.033004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.033016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.033024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.033037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.033044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.033056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.033063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.033075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.033082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.033094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.033101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.033113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.033120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.033132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.033138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:00.713 [2024-05-15 17:14:33.033152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.713 [2024-05-15 17:14:33.033159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.033178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.033185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.033199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.033206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.033218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.033225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.033237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.033243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.033256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.033263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
00:24:00.714 [2024-05-15 17:14:33.033482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.033492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.033505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.033512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.033525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.033532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.033545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.033552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.033564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.033571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.033583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.033590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.033602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.033613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.033626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.033633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.034327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.034640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.034800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.034820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.034839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.034858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:00.714 [2024-05-15 17:14:33.034877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.714 [2024-05-15 17:14:33.034896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.034986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.034993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.035012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.035030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.035049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 
lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.035068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.035087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.035106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.035125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.035145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.035169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.035190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.035209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.035228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.035246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.714 [2024-05-15 17:14:33.035266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:00.714 [2024-05-15 17:14:33.035278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.035285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.035305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.035324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.035343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.035363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.035384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.035403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.035422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.035441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
00:24:00.715 [2024-05-15 17:14:33.035453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.035460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.035479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.035499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.035518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.035537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.035835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.035856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.035875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.035897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.035916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.035935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.035954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.035973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.035985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.035992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.715 [2024-05-15 17:14:33.036288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:00.715 [2024-05-15 17:14:33.036309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12152 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.715 [2024-05-15 17:14:33.036659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:00.715 [2024-05-15 17:14:33.036672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.036680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.036692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.036699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.036711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.036718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.036731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.036738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.036751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.036757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.036770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.036776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 
00:24:00.716 [2024-05-15 17:14:33.037315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.037668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.037958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.037964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.038094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.038105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.038130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.038140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.038155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.038163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.038185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.038192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.038206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.038214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.038229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:110 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.038235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.038250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.716 [2024-05-15 17:14:33.038257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.038271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.038277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.038292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.038298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:00.716 [2024-05-15 17:14:33.038313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.716 [2024-05-15 17:14:33.038320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:00.717 [2024-05-15 17:14:33.038334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.717 [2024-05-15 17:14:33.038341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:00.717 [2024-05-15 17:14:33.038355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.717 [2024-05-15 17:14:33.038362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:00.717 [2024-05-15 17:14:33.038378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.717 [2024-05-15 17:14:33.038385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:00.717 [2024-05-15 17:14:33.038399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.717 [2024-05-15 17:14:33.038408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:00.717 [2024-05-15 17:14:33.038422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.717 [2024-05-15 17:14:33.038429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:00.717 [2024-05-15 17:14:33.038443] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.717 [2024-05-15 17:14:33.038450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
[... the same notice pair (nvme_io_qpair_print_command for a READ or WRITE on sqid:1, then spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1) repeats for LBAs 11496 through 24440 at 17:14:33 and 17:14:45; the repeated entries are condensed here ...]
00:24:00.719 [2024-05-15 17:14:45.825472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.719 [2024-05-15 17:14:45.825479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:00.719 [2024-05-15 17:14:45.825492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:00.719 [2024-05-15 17:14:45.825499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:00.719 Received shutdown signal, test time was about 26.979669 seconds 00:24:00.719 00:24:00.719 Latency(us) 00:24:00.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.719 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:00.719 Verification LBA range: start 0x0 length 0x4000 00:24:00.719 Nvme0n1 : 26.98 10111.27 39.50 0.00 0.00 12637.74 544.95 3078254.41 00:24:00.719 =================================================================================================================== 00:24:00.719 Total : 10111.27 39.50 0.00 0.00 12637.74 544.95 3078254.41 00:24:00.719 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:00.978 rmmod nvme_tcp 00:24:00.978 rmmod nvme_fabrics 00:24:00.978 rmmod nvme_keyring 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3172385 ']' 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3172385 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3172385 ']' 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3172385 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3172385 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3172385' 00:24:00.978 killing process with pid 3172385 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3172385 00:24:00.978 [2024-05-15 17:14:48.496002] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:00.978 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3172385 00:24:01.236 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:01.236 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:01.237 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:01.237 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:01.237 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:01.237 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.237 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.237 17:14:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.139 17:14:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:03.139 00:24:03.139 real 0m38.459s 00:24:03.139 user 1m43.973s 00:24:03.139 sys 0m10.341s 00:24:03.139 17:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:03.139 17:14:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:03.139 ************************************ 00:24:03.139 END TEST nvmf_host_multipath_status 00:24:03.139 ************************************ 00:24:03.398 17:14:50 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:03.398 17:14:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:03.398 17:14:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:03.398 17:14:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:03.398 ************************************ 00:24:03.398 START TEST nvmf_discovery_remove_ifc 00:24:03.398 ************************************ 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:03.398 * Looking for test storage... 
00:24:03.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:03.398 17:14:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:08.666 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:08.666 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.666 17:14:55 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:08.666 Found net devices under 0000:86:00.0: cvl_0_0 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:08.666 Found net devices under 0000:86:00.1: cvl_0_1 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:08.666 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:08.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:24:08.667 00:24:08.667 --- 10.0.0.2 ping statistics --- 00:24:08.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.667 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:24:08.667 17:14:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:08.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:24:08.667 00:24:08.667 --- 10.0.0.1 ping statistics --- 00:24:08.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.667 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3180939 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3180939 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3180939 ']' 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:08.667 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.667 [2024-05-15 17:14:56.088629] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
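Condensed from the nvmf_tcp_init trace above: the target-side port is moved into a private network namespace and the two sides talk over 10.0.0.0/24, with the initiator interface left in the root namespace. A minimal sketch of the same steps, assuming this run's interface names (cvl_0_0 / cvl_0_1); the authoritative logic is the nvmf/common.sh referenced throughout the trace:

  # Namespace topology reconstructed from the nvmf_tcp_init trace above (sketch only;
  # interface names and addresses are specific to this run).
  TARGET_NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"                           # target port moves into its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator address stays in the root namespace
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target listen address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                               # root ns -> target ns
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                    # target ns -> root ns

The target application is then launched inside that namespace, which is why the trace shows the nvmf_tgt binary being started through ip netns exec cvl_0_0_ns_spdk when nvmfappstart runs.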
00:24:08.667 [2024-05-15 17:14:56.088673] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.667 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.667 [2024-05-15 17:14:56.146046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.667 [2024-05-15 17:14:56.224670] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.667 [2024-05-15 17:14:56.224705] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.667 [2024-05-15 17:14:56.224712] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.667 [2024-05-15 17:14:56.224718] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.667 [2024-05-15 17:14:56.224723] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.667 [2024-05-15 17:14:56.224741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.233 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:09.233 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:24:09.233 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:09.233 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:09.233 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.548 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.548 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:09.548 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.548 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.548 [2024-05-15 17:14:56.935716] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.548 [2024-05-15 17:14:56.943686] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:09.548 [2024-05-15 17:14:56.943847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:09.548 null0 00:24:09.548 [2024-05-15 17:14:56.975847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.548 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.548 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3181186 00:24:09.548 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3181186 /tmp/host.sock 00:24:09.548 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3181186 ']' 00:24:09.548 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:24:09.548 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 
00:24:09.548 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:09.548 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:09.548 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:09.548 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:09.548 17:14:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.548 [2024-05-15 17:14:57.038817] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:24:09.548 [2024-05-15 17:14:57.038858] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181186 ] 00:24:09.548 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.548 [2024-05-15 17:14:57.091941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.548 [2024-05-15 17:14:57.175281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.491 17:14:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:10.491 17:14:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:24:10.491 17:14:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.491 17:14:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:10.491 17:14:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.491 17:14:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.491 17:14:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.491 17:14:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:10.491 17:14:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.491 17:14:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.491 17:14:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.491 17:14:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:10.491 17:14:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.491 17:14:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.425 [2024-05-15 17:14:58.974256] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:11.425 [2024-05-15 17:14:58.974281] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:11.425 [2024-05-15 
17:14:58.974296] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:11.425 [2024-05-15 17:14:59.060553] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:11.683 [2024-05-15 17:14:59.116241] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:11.683 [2024-05-15 17:14:59.116284] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:11.683 [2024-05-15 17:14:59.116305] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:11.683 [2024-05-15 17:14:59.116318] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:11.683 [2024-05-15 17:14:59.116335] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:11.683 [2024-05-15 17:14:59.122623] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x14198b0 was disconnected and freed. delete nvme_qpair. 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:11.683 17:14:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:13.056 17:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:13.056 17:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.056 17:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:13.056 17:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.056 17:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:13.056 17:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:13.056 17:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:13.056 17:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.056 17:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:13.056 17:15:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:13.989 17:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:13.989 17:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.989 17:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:13.989 17:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.989 17:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:13.989 17:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:13.989 17:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:13.989 17:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.989 17:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:13.989 17:15:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:14.922 17:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:14.922 17:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:14.922 17:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:14.922 17:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.922 17:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:14.922 17:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.922 17:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:14.922 17:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.922 17:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:14.922 17:15:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:15.854 17:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:15.854 17:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.854 17:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:15.855 17:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.855 17:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:15.855 17:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.855 17:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:15.855 17:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.112 17:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:16.112 17:15:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:17.046 17:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:17.046 17:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.046 17:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:17.046 17:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:17.046 17:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.046 17:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:17.046 17:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.046 17:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.046 [2024-05-15 17:15:04.557746] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:17.046 [2024-05-15 17:15:04.557788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.046 [2024-05-15 17:15:04.557799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.046 [2024-05-15 17:15:04.557809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.046 [2024-05-15 17:15:04.557815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.046 [2024-05-15 17:15:04.557822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.046 [2024-05-15 17:15:04.557829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.046 [2024-05-15 17:15:04.557836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.046 [2024-05-15 17:15:04.557843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.046 [2024-05-15 17:15:04.557850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:17.046 [2024-05-15 17:15:04.557857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.046 [2024-05-15 17:15:04.557864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e09e0 is same with the state(5) to be set 00:24:17.046 [2024-05-15 17:15:04.567765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e09e0 (9): Bad file descriptor 00:24:17.046 [2024-05-15 17:15:04.577806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:17.046 17:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:17.046 17:15:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:17.980 17:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:17.980 17:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.980 17:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:17.980 17:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.980 17:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:17.980 17:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.980 17:15:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:17.980 [2024-05-15 17:15:05.628209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:19.354 [2024-05-15 17:15:06.652204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:19.354 [2024-05-15 17:15:06.652252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e09e0 with addr=10.0.0.2, port=4420 00:24:19.354 [2024-05-15 17:15:06.652269] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e09e0 is same with the state(5) to be set 00:24:19.354 [2024-05-15 17:15:06.652699] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e09e0 (9): Bad file descriptor 00:24:19.354 [2024-05-15 17:15:06.652726] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:19.354 [2024-05-15 17:15:06.652750] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:19.354 [2024-05-15 17:15:06.652774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.354 [2024-05-15 17:15:06.652788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.354 [2024-05-15 17:15:06.652802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.354 [2024-05-15 17:15:06.652812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.354 [2024-05-15 17:15:06.652823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.354 [2024-05-15 17:15:06.652832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.354 [2024-05-15 17:15:06.652842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.354 [2024-05-15 17:15:06.652852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.354 [2024-05-15 17:15:06.652862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.354 [2024-05-15 17:15:06.652871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.354 [2024-05-15 17:15:06.652881] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
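The connection being torn down here was created earlier in this trace by bdev_nvme_start_discovery against 10.0.0.2:8009. Reproduced as a sketch below (the test issues it through its rpc_cmd wrapper on /tmp/host.sock; SPDK's scripts/rpc.py is shown here for illustration). Roughly, the three timeout knobs make the host retry every second, fail fast on outstanding I/O after one second, and give up on the controller, deleting nvme0n1, once it has been unreachable for about two seconds:

  # Host-side discovery start, flags copied from the trace above.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach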
00:24:19.354 [2024-05-15 17:15:06.653297] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13dfe10 (9): Bad file descriptor 00:24:19.354 [2024-05-15 17:15:06.654311] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:19.354 [2024-05-15 17:15:06.654326] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:19.354 17:15:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.354 17:15:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:19.354 17:15:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:20.287 17:15:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:21.220 [2024-05-15 17:15:08.668701] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:21.220 [2024-05-15 17:15:08.668719] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:21.220 [2024-05-15 17:15:08.668732] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:21.220 [2024-05-15 17:15:08.795126] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:21.220 17:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:21.220 17:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:21.220 17:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:21.220 17:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.220 17:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:21.220 17:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.220 17:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:21.220 17:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.478 17:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:21.478 17:15:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:21.478 [2024-05-15 17:15:08.930638] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:21.478 [2024-05-15 17:15:08.930674] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:21.478 [2024-05-15 17:15:08.930691] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:21.478 [2024-05-15 17:15:08.930704] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:21.478 [2024-05-15 17:15:08.930711] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:21.478 [2024-05-15 17:15:08.937998] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x14246a0 was disconnected and freed. delete nvme_qpair. 
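The repeated get_bdev_list / sleep 1 blocks in this trace are the test polling the host app's bdev list until it matches the expected value: empty after the interface is pulled, then nvme1n1 once discovery re-attaches. A sketch of that polling, assuming the rpc.py form of the call; the real helpers live in discovery_remove_ifc.sh and may bound the number of retries:

  # Polling sketch matching the rpc_cmd | jq | sort | xargs pipeline visible above.
  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }

  wait_for_bdev nvme1n1   # e.g. block until the re-discovered namespace shows up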
00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3181186 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3181186 ']' 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3181186 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3181186 00:24:22.410 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:22.411 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:22.411 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3181186' 00:24:22.411 killing process with pid 3181186 00:24:22.411 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3181186 00:24:22.411 17:15:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3181186 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:22.669 rmmod nvme_tcp 00:24:22.669 rmmod nvme_fabrics 00:24:22.669 rmmod nvme_keyring 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
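The killprocess calls traced here (first for the host app, then below for the target) follow a simple pattern: validate the pid, confirm the process is alive and not running under sudo, then kill and reap it. A reduced sketch, assuming the process is a child of the calling shell; the real helper in autotest_common.sh also special-cases sudo-owned processes via the ps comm= check seen above:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1        # refuse an empty pid
      kill -0 "$pid" || return 0       # already gone, nothing to do
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true              # reap it (only works for children of this shell)
  }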
00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3180939 ']' 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3180939 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3180939 ']' 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3180939 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3180939 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3180939' 00:24:22.669 killing process with pid 3180939 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3180939 00:24:22.669 [2024-05-15 17:15:10.289252] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:22.669 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3180939 00:24:22.927 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:22.927 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:22.927 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:22.927 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:22.927 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:22.927 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.927 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:22.927 17:15:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.456 17:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:25.456 00:24:25.456 real 0m21.715s 00:24:25.456 user 0m27.315s 00:24:25.456 sys 0m5.164s 00:24:25.456 17:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:25.456 17:15:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.456 ************************************ 00:24:25.456 END TEST nvmf_discovery_remove_ifc 00:24:25.456 ************************************ 00:24:25.456 17:15:12 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:25.456 17:15:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:25.456 17:15:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:25.456 17:15:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
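Each test script in this log is wrapped by run_test, which prints the START/END banners and the real/user/sys timing seen above and below. Inferred from those banners only, the wrapper is roughly the following; the actual helper lives in autotest_common.sh and also manages xtrace state and argument checks:

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                        # e.g. identify_kernel_nvmf.sh --transport=tcp
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }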
00:24:25.456 ************************************ 00:24:25.456 START TEST nvmf_identify_kernel_target 00:24:25.456 ************************************ 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:25.456 * Looking for test storage... 00:24:25.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:25.456 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:25.457 17:15:12 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.457 17:15:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:30.722 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:30.723 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:30.723 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:30.723 Found net devices under 0000:86:00.0: cvl_0_0 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:30.723 Found net devices under 0000:86:00.1: cvl_0_1 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:30.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:24:30.723 00:24:30.723 --- 10.0.0.2 ping statistics --- 00:24:30.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.723 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:24:30.723 17:15:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:30.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:24:30.723 00:24:30.723 --- 10.0.0.1 ping statistics --- 00:24:30.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.723 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:30.723 17:15:18 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:30.723 17:15:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:32.623 Waiting for block devices as requested 00:24:32.623 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:32.623 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:32.881 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:32.881 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:32.881 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:32.881 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:33.167 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:33.167 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:33.167 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:33.167 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:33.426 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:33.426 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:33.426 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:33.426 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:33.683 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:33.683 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:33.683 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:33.942 No valid GPT data, bailing 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:33.942 00:24:33.942 Discovery Log Number of Records 2, Generation counter 2 00:24:33.942 =====Discovery Log Entry 0====== 00:24:33.942 trtype: tcp 00:24:33.942 adrfam: ipv4 00:24:33.942 subtype: current discovery subsystem 00:24:33.942 treq: not specified, sq flow control disable supported 00:24:33.942 portid: 1 00:24:33.942 trsvcid: 4420 00:24:33.942 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:33.942 traddr: 10.0.0.1 00:24:33.942 eflags: none 00:24:33.942 sectype: none 00:24:33.942 =====Discovery Log Entry 1====== 00:24:33.942 trtype: tcp 00:24:33.942 adrfam: ipv4 00:24:33.942 subtype: nvme subsystem 00:24:33.942 treq: not specified, sq flow control disable supported 00:24:33.942 portid: 1 00:24:33.942 trsvcid: 4420 00:24:33.942 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:33.942 traddr: 10.0.0.1 00:24:33.942 eflags: none 00:24:33.942 sectype: none 00:24:33.942 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:33.942 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:33.942 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.942 ===================================================== 00:24:33.942 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:33.942 ===================================================== 00:24:33.942 Controller Capabilities/Features 00:24:33.942 ================================ 00:24:33.942 Vendor ID: 0000 00:24:33.942 Subsystem Vendor ID: 0000 00:24:33.942 Serial Number: 399799a1d65613e1b442 00:24:33.942 Model Number: Linux 00:24:33.942 Firmware Version: 6.7.0-68 00:24:33.942 Recommended Arb Burst: 0 00:24:33.942 IEEE OUI Identifier: 00 00 00 00:24:33.942 Multi-path I/O 00:24:33.942 May have multiple subsystem ports: No 00:24:33.942 May have multiple 
controllers: No 00:24:33.942 Associated with SR-IOV VF: No 00:24:33.942 Max Data Transfer Size: Unlimited 00:24:33.942 Max Number of Namespaces: 0 00:24:33.942 Max Number of I/O Queues: 1024 00:24:33.942 NVMe Specification Version (VS): 1.3 00:24:33.942 NVMe Specification Version (Identify): 1.3 00:24:33.942 Maximum Queue Entries: 1024 00:24:33.942 Contiguous Queues Required: No 00:24:33.942 Arbitration Mechanisms Supported 00:24:33.942 Weighted Round Robin: Not Supported 00:24:33.942 Vendor Specific: Not Supported 00:24:33.942 Reset Timeout: 7500 ms 00:24:33.942 Doorbell Stride: 4 bytes 00:24:33.942 NVM Subsystem Reset: Not Supported 00:24:33.942 Command Sets Supported 00:24:33.942 NVM Command Set: Supported 00:24:33.942 Boot Partition: Not Supported 00:24:33.942 Memory Page Size Minimum: 4096 bytes 00:24:33.942 Memory Page Size Maximum: 4096 bytes 00:24:33.942 Persistent Memory Region: Not Supported 00:24:33.942 Optional Asynchronous Events Supported 00:24:33.942 Namespace Attribute Notices: Not Supported 00:24:33.942 Firmware Activation Notices: Not Supported 00:24:33.942 ANA Change Notices: Not Supported 00:24:33.942 PLE Aggregate Log Change Notices: Not Supported 00:24:33.942 LBA Status Info Alert Notices: Not Supported 00:24:33.942 EGE Aggregate Log Change Notices: Not Supported 00:24:33.942 Normal NVM Subsystem Shutdown event: Not Supported 00:24:33.942 Zone Descriptor Change Notices: Not Supported 00:24:33.942 Discovery Log Change Notices: Supported 00:24:33.942 Controller Attributes 00:24:33.942 128-bit Host Identifier: Not Supported 00:24:33.942 Non-Operational Permissive Mode: Not Supported 00:24:33.942 NVM Sets: Not Supported 00:24:33.942 Read Recovery Levels: Not Supported 00:24:33.942 Endurance Groups: Not Supported 00:24:33.942 Predictable Latency Mode: Not Supported 00:24:33.942 Traffic Based Keep ALive: Not Supported 00:24:33.942 Namespace Granularity: Not Supported 00:24:33.942 SQ Associations: Not Supported 00:24:33.942 UUID List: Not Supported 00:24:33.942 Multi-Domain Subsystem: Not Supported 00:24:33.942 Fixed Capacity Management: Not Supported 00:24:33.942 Variable Capacity Management: Not Supported 00:24:33.942 Delete Endurance Group: Not Supported 00:24:33.942 Delete NVM Set: Not Supported 00:24:33.942 Extended LBA Formats Supported: Not Supported 00:24:33.942 Flexible Data Placement Supported: Not Supported 00:24:33.942 00:24:33.942 Controller Memory Buffer Support 00:24:33.942 ================================ 00:24:33.942 Supported: No 00:24:33.942 00:24:33.942 Persistent Memory Region Support 00:24:33.942 ================================ 00:24:33.942 Supported: No 00:24:33.942 00:24:33.942 Admin Command Set Attributes 00:24:33.942 ============================ 00:24:33.942 Security Send/Receive: Not Supported 00:24:33.942 Format NVM: Not Supported 00:24:33.942 Firmware Activate/Download: Not Supported 00:24:33.942 Namespace Management: Not Supported 00:24:33.942 Device Self-Test: Not Supported 00:24:33.942 Directives: Not Supported 00:24:33.942 NVMe-MI: Not Supported 00:24:33.942 Virtualization Management: Not Supported 00:24:33.942 Doorbell Buffer Config: Not Supported 00:24:33.942 Get LBA Status Capability: Not Supported 00:24:33.942 Command & Feature Lockdown Capability: Not Supported 00:24:33.942 Abort Command Limit: 1 00:24:33.942 Async Event Request Limit: 1 00:24:33.943 Number of Firmware Slots: N/A 00:24:33.943 Firmware Slot 1 Read-Only: N/A 00:24:33.943 Firmware Activation Without Reset: N/A 00:24:33.943 Multiple Update Detection Support: N/A 
00:24:33.943 Firmware Update Granularity: No Information Provided 00:24:33.943 Per-Namespace SMART Log: No 00:24:33.943 Asymmetric Namespace Access Log Page: Not Supported 00:24:33.943 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:33.943 Command Effects Log Page: Not Supported 00:24:33.943 Get Log Page Extended Data: Supported 00:24:33.943 Telemetry Log Pages: Not Supported 00:24:33.943 Persistent Event Log Pages: Not Supported 00:24:33.943 Supported Log Pages Log Page: May Support 00:24:33.943 Commands Supported & Effects Log Page: Not Supported 00:24:33.943 Feature Identifiers & Effects Log Page:May Support 00:24:33.943 NVMe-MI Commands & Effects Log Page: May Support 00:24:33.943 Data Area 4 for Telemetry Log: Not Supported 00:24:33.943 Error Log Page Entries Supported: 1 00:24:33.943 Keep Alive: Not Supported 00:24:33.943 00:24:33.943 NVM Command Set Attributes 00:24:33.943 ========================== 00:24:33.943 Submission Queue Entry Size 00:24:33.943 Max: 1 00:24:33.943 Min: 1 00:24:33.943 Completion Queue Entry Size 00:24:33.943 Max: 1 00:24:33.943 Min: 1 00:24:33.943 Number of Namespaces: 0 00:24:33.943 Compare Command: Not Supported 00:24:33.943 Write Uncorrectable Command: Not Supported 00:24:33.943 Dataset Management Command: Not Supported 00:24:33.943 Write Zeroes Command: Not Supported 00:24:33.943 Set Features Save Field: Not Supported 00:24:33.943 Reservations: Not Supported 00:24:33.943 Timestamp: Not Supported 00:24:33.943 Copy: Not Supported 00:24:33.943 Volatile Write Cache: Not Present 00:24:33.943 Atomic Write Unit (Normal): 1 00:24:33.943 Atomic Write Unit (PFail): 1 00:24:33.943 Atomic Compare & Write Unit: 1 00:24:33.943 Fused Compare & Write: Not Supported 00:24:33.943 Scatter-Gather List 00:24:33.943 SGL Command Set: Supported 00:24:33.943 SGL Keyed: Not Supported 00:24:33.943 SGL Bit Bucket Descriptor: Not Supported 00:24:33.943 SGL Metadata Pointer: Not Supported 00:24:33.943 Oversized SGL: Not Supported 00:24:33.943 SGL Metadata Address: Not Supported 00:24:33.943 SGL Offset: Supported 00:24:33.943 Transport SGL Data Block: Not Supported 00:24:33.943 Replay Protected Memory Block: Not Supported 00:24:33.943 00:24:33.943 Firmware Slot Information 00:24:33.943 ========================= 00:24:33.943 Active slot: 0 00:24:33.943 00:24:33.943 00:24:33.943 Error Log 00:24:33.943 ========= 00:24:33.943 00:24:33.943 Active Namespaces 00:24:33.943 ================= 00:24:33.943 Discovery Log Page 00:24:33.943 ================== 00:24:33.943 Generation Counter: 2 00:24:33.943 Number of Records: 2 00:24:33.943 Record Format: 0 00:24:33.943 00:24:33.943 Discovery Log Entry 0 00:24:33.943 ---------------------- 00:24:33.943 Transport Type: 3 (TCP) 00:24:33.943 Address Family: 1 (IPv4) 00:24:33.943 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:33.943 Entry Flags: 00:24:33.943 Duplicate Returned Information: 0 00:24:33.943 Explicit Persistent Connection Support for Discovery: 0 00:24:33.943 Transport Requirements: 00:24:33.943 Secure Channel: Not Specified 00:24:33.943 Port ID: 1 (0x0001) 00:24:33.943 Controller ID: 65535 (0xffff) 00:24:33.943 Admin Max SQ Size: 32 00:24:33.943 Transport Service Identifier: 4420 00:24:33.943 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:33.943 Transport Address: 10.0.0.1 00:24:33.943 Discovery Log Entry 1 00:24:33.943 ---------------------- 00:24:33.943 Transport Type: 3 (TCP) 00:24:33.943 Address Family: 1 (IPv4) 00:24:33.943 Subsystem Type: 2 (NVM Subsystem) 00:24:33.943 Entry Flags: 
00:24:33.943 Duplicate Returned Information: 0 00:24:33.943 Explicit Persistent Connection Support for Discovery: 0 00:24:33.943 Transport Requirements: 00:24:33.943 Secure Channel: Not Specified 00:24:33.943 Port ID: 1 (0x0001) 00:24:33.943 Controller ID: 65535 (0xffff) 00:24:33.943 Admin Max SQ Size: 32 00:24:33.943 Transport Service Identifier: 4420 00:24:33.943 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:33.943 Transport Address: 10.0.0.1 00:24:33.943 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:34.202 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.202 get_feature(0x01) failed 00:24:34.202 get_feature(0x02) failed 00:24:34.202 get_feature(0x04) failed 00:24:34.202 ===================================================== 00:24:34.202 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:34.202 ===================================================== 00:24:34.202 Controller Capabilities/Features 00:24:34.202 ================================ 00:24:34.202 Vendor ID: 0000 00:24:34.202 Subsystem Vendor ID: 0000 00:24:34.202 Serial Number: 17f5e5783fd5aff5a98d 00:24:34.202 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:34.202 Firmware Version: 6.7.0-68 00:24:34.202 Recommended Arb Burst: 6 00:24:34.202 IEEE OUI Identifier: 00 00 00 00:24:34.202 Multi-path I/O 00:24:34.202 May have multiple subsystem ports: Yes 00:24:34.202 May have multiple controllers: Yes 00:24:34.202 Associated with SR-IOV VF: No 00:24:34.202 Max Data Transfer Size: Unlimited 00:24:34.202 Max Number of Namespaces: 1024 00:24:34.202 Max Number of I/O Queues: 128 00:24:34.202 NVMe Specification Version (VS): 1.3 00:24:34.202 NVMe Specification Version (Identify): 1.3 00:24:34.202 Maximum Queue Entries: 1024 00:24:34.202 Contiguous Queues Required: No 00:24:34.202 Arbitration Mechanisms Supported 00:24:34.202 Weighted Round Robin: Not Supported 00:24:34.202 Vendor Specific: Not Supported 00:24:34.202 Reset Timeout: 7500 ms 00:24:34.202 Doorbell Stride: 4 bytes 00:24:34.202 NVM Subsystem Reset: Not Supported 00:24:34.202 Command Sets Supported 00:24:34.202 NVM Command Set: Supported 00:24:34.202 Boot Partition: Not Supported 00:24:34.202 Memory Page Size Minimum: 4096 bytes 00:24:34.202 Memory Page Size Maximum: 4096 bytes 00:24:34.202 Persistent Memory Region: Not Supported 00:24:34.202 Optional Asynchronous Events Supported 00:24:34.202 Namespace Attribute Notices: Supported 00:24:34.202 Firmware Activation Notices: Not Supported 00:24:34.202 ANA Change Notices: Supported 00:24:34.202 PLE Aggregate Log Change Notices: Not Supported 00:24:34.202 LBA Status Info Alert Notices: Not Supported 00:24:34.202 EGE Aggregate Log Change Notices: Not Supported 00:24:34.202 Normal NVM Subsystem Shutdown event: Not Supported 00:24:34.202 Zone Descriptor Change Notices: Not Supported 00:24:34.202 Discovery Log Change Notices: Not Supported 00:24:34.202 Controller Attributes 00:24:34.202 128-bit Host Identifier: Supported 00:24:34.202 Non-Operational Permissive Mode: Not Supported 00:24:34.202 NVM Sets: Not Supported 00:24:34.202 Read Recovery Levels: Not Supported 00:24:34.202 Endurance Groups: Not Supported 00:24:34.202 Predictable Latency Mode: Not Supported 00:24:34.202 Traffic Based Keep ALive: Supported 00:24:34.202 Namespace Granularity: Not Supported 
00:24:34.202 SQ Associations: Not Supported 00:24:34.202 UUID List: Not Supported 00:24:34.202 Multi-Domain Subsystem: Not Supported 00:24:34.202 Fixed Capacity Management: Not Supported 00:24:34.202 Variable Capacity Management: Not Supported 00:24:34.202 Delete Endurance Group: Not Supported 00:24:34.203 Delete NVM Set: Not Supported 00:24:34.203 Extended LBA Formats Supported: Not Supported 00:24:34.203 Flexible Data Placement Supported: Not Supported 00:24:34.203 00:24:34.203 Controller Memory Buffer Support 00:24:34.203 ================================ 00:24:34.203 Supported: No 00:24:34.203 00:24:34.203 Persistent Memory Region Support 00:24:34.203 ================================ 00:24:34.203 Supported: No 00:24:34.203 00:24:34.203 Admin Command Set Attributes 00:24:34.203 ============================ 00:24:34.203 Security Send/Receive: Not Supported 00:24:34.203 Format NVM: Not Supported 00:24:34.203 Firmware Activate/Download: Not Supported 00:24:34.203 Namespace Management: Not Supported 00:24:34.203 Device Self-Test: Not Supported 00:24:34.203 Directives: Not Supported 00:24:34.203 NVMe-MI: Not Supported 00:24:34.203 Virtualization Management: Not Supported 00:24:34.203 Doorbell Buffer Config: Not Supported 00:24:34.203 Get LBA Status Capability: Not Supported 00:24:34.203 Command & Feature Lockdown Capability: Not Supported 00:24:34.203 Abort Command Limit: 4 00:24:34.203 Async Event Request Limit: 4 00:24:34.203 Number of Firmware Slots: N/A 00:24:34.203 Firmware Slot 1 Read-Only: N/A 00:24:34.203 Firmware Activation Without Reset: N/A 00:24:34.203 Multiple Update Detection Support: N/A 00:24:34.203 Firmware Update Granularity: No Information Provided 00:24:34.203 Per-Namespace SMART Log: Yes 00:24:34.203 Asymmetric Namespace Access Log Page: Supported 00:24:34.203 ANA Transition Time : 10 sec 00:24:34.203 00:24:34.203 Asymmetric Namespace Access Capabilities 00:24:34.203 ANA Optimized State : Supported 00:24:34.203 ANA Non-Optimized State : Supported 00:24:34.203 ANA Inaccessible State : Supported 00:24:34.203 ANA Persistent Loss State : Supported 00:24:34.203 ANA Change State : Supported 00:24:34.203 ANAGRPID is not changed : No 00:24:34.203 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:34.203 00:24:34.203 ANA Group Identifier Maximum : 128 00:24:34.203 Number of ANA Group Identifiers : 128 00:24:34.203 Max Number of Allowed Namespaces : 1024 00:24:34.203 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:34.203 Command Effects Log Page: Supported 00:24:34.203 Get Log Page Extended Data: Supported 00:24:34.203 Telemetry Log Pages: Not Supported 00:24:34.203 Persistent Event Log Pages: Not Supported 00:24:34.203 Supported Log Pages Log Page: May Support 00:24:34.203 Commands Supported & Effects Log Page: Not Supported 00:24:34.203 Feature Identifiers & Effects Log Page:May Support 00:24:34.203 NVMe-MI Commands & Effects Log Page: May Support 00:24:34.203 Data Area 4 for Telemetry Log: Not Supported 00:24:34.203 Error Log Page Entries Supported: 128 00:24:34.203 Keep Alive: Supported 00:24:34.203 Keep Alive Granularity: 1000 ms 00:24:34.203 00:24:34.203 NVM Command Set Attributes 00:24:34.203 ========================== 00:24:34.203 Submission Queue Entry Size 00:24:34.203 Max: 64 00:24:34.203 Min: 64 00:24:34.203 Completion Queue Entry Size 00:24:34.203 Max: 16 00:24:34.203 Min: 16 00:24:34.203 Number of Namespaces: 1024 00:24:34.203 Compare Command: Not Supported 00:24:34.203 Write Uncorrectable Command: Not Supported 00:24:34.203 Dataset Management Command: Supported 
00:24:34.203 Write Zeroes Command: Supported 00:24:34.203 Set Features Save Field: Not Supported 00:24:34.203 Reservations: Not Supported 00:24:34.203 Timestamp: Not Supported 00:24:34.203 Copy: Not Supported 00:24:34.203 Volatile Write Cache: Present 00:24:34.203 Atomic Write Unit (Normal): 1 00:24:34.203 Atomic Write Unit (PFail): 1 00:24:34.203 Atomic Compare & Write Unit: 1 00:24:34.203 Fused Compare & Write: Not Supported 00:24:34.203 Scatter-Gather List 00:24:34.203 SGL Command Set: Supported 00:24:34.203 SGL Keyed: Not Supported 00:24:34.203 SGL Bit Bucket Descriptor: Not Supported 00:24:34.203 SGL Metadata Pointer: Not Supported 00:24:34.203 Oversized SGL: Not Supported 00:24:34.203 SGL Metadata Address: Not Supported 00:24:34.203 SGL Offset: Supported 00:24:34.203 Transport SGL Data Block: Not Supported 00:24:34.203 Replay Protected Memory Block: Not Supported 00:24:34.203 00:24:34.203 Firmware Slot Information 00:24:34.203 ========================= 00:24:34.203 Active slot: 0 00:24:34.203 00:24:34.203 Asymmetric Namespace Access 00:24:34.203 =========================== 00:24:34.203 Change Count : 0 00:24:34.203 Number of ANA Group Descriptors : 1 00:24:34.203 ANA Group Descriptor : 0 00:24:34.203 ANA Group ID : 1 00:24:34.203 Number of NSID Values : 1 00:24:34.203 Change Count : 0 00:24:34.203 ANA State : 1 00:24:34.203 Namespace Identifier : 1 00:24:34.203 00:24:34.203 Commands Supported and Effects 00:24:34.203 ============================== 00:24:34.203 Admin Commands 00:24:34.203 -------------- 00:24:34.203 Get Log Page (02h): Supported 00:24:34.203 Identify (06h): Supported 00:24:34.203 Abort (08h): Supported 00:24:34.203 Set Features (09h): Supported 00:24:34.203 Get Features (0Ah): Supported 00:24:34.203 Asynchronous Event Request (0Ch): Supported 00:24:34.203 Keep Alive (18h): Supported 00:24:34.203 I/O Commands 00:24:34.203 ------------ 00:24:34.203 Flush (00h): Supported 00:24:34.203 Write (01h): Supported LBA-Change 00:24:34.203 Read (02h): Supported 00:24:34.203 Write Zeroes (08h): Supported LBA-Change 00:24:34.203 Dataset Management (09h): Supported 00:24:34.203 00:24:34.203 Error Log 00:24:34.203 ========= 00:24:34.203 Entry: 0 00:24:34.203 Error Count: 0x3 00:24:34.203 Submission Queue Id: 0x0 00:24:34.203 Command Id: 0x5 00:24:34.203 Phase Bit: 0 00:24:34.203 Status Code: 0x2 00:24:34.203 Status Code Type: 0x0 00:24:34.203 Do Not Retry: 1 00:24:34.203 Error Location: 0x28 00:24:34.203 LBA: 0x0 00:24:34.203 Namespace: 0x0 00:24:34.203 Vendor Log Page: 0x0 00:24:34.203 ----------- 00:24:34.203 Entry: 1 00:24:34.203 Error Count: 0x2 00:24:34.203 Submission Queue Id: 0x0 00:24:34.203 Command Id: 0x5 00:24:34.203 Phase Bit: 0 00:24:34.203 Status Code: 0x2 00:24:34.203 Status Code Type: 0x0 00:24:34.203 Do Not Retry: 1 00:24:34.203 Error Location: 0x28 00:24:34.203 LBA: 0x0 00:24:34.203 Namespace: 0x0 00:24:34.203 Vendor Log Page: 0x0 00:24:34.203 ----------- 00:24:34.203 Entry: 2 00:24:34.203 Error Count: 0x1 00:24:34.203 Submission Queue Id: 0x0 00:24:34.203 Command Id: 0x4 00:24:34.203 Phase Bit: 0 00:24:34.203 Status Code: 0x2 00:24:34.203 Status Code Type: 0x0 00:24:34.203 Do Not Retry: 1 00:24:34.203 Error Location: 0x28 00:24:34.203 LBA: 0x0 00:24:34.203 Namespace: 0x0 00:24:34.203 Vendor Log Page: 0x0 00:24:34.203 00:24:34.203 Number of Queues 00:24:34.203 ================ 00:24:34.203 Number of I/O Submission Queues: 128 00:24:34.203 Number of I/O Completion Queues: 128 00:24:34.203 00:24:34.203 ZNS Specific Controller Data 00:24:34.203 
============================ 00:24:34.203 Zone Append Size Limit: 0 00:24:34.203 00:24:34.203 00:24:34.203 Active Namespaces 00:24:34.203 ================= 00:24:34.203 get_feature(0x05) failed 00:24:34.203 Namespace ID:1 00:24:34.203 Command Set Identifier: NVM (00h) 00:24:34.203 Deallocate: Supported 00:24:34.203 Deallocated/Unwritten Error: Not Supported 00:24:34.203 Deallocated Read Value: Unknown 00:24:34.203 Deallocate in Write Zeroes: Not Supported 00:24:34.203 Deallocated Guard Field: 0xFFFF 00:24:34.203 Flush: Supported 00:24:34.203 Reservation: Not Supported 00:24:34.203 Namespace Sharing Capabilities: Multiple Controllers 00:24:34.203 Size (in LBAs): 1953525168 (931GiB) 00:24:34.203 Capacity (in LBAs): 1953525168 (931GiB) 00:24:34.203 Utilization (in LBAs): 1953525168 (931GiB) 00:24:34.203 UUID: 0a712d20-157a-4f32-9f58-515fc313dca2 00:24:34.203 Thin Provisioning: Not Supported 00:24:34.203 Per-NS Atomic Units: Yes 00:24:34.203 Atomic Boundary Size (Normal): 0 00:24:34.203 Atomic Boundary Size (PFail): 0 00:24:34.203 Atomic Boundary Offset: 0 00:24:34.203 NGUID/EUI64 Never Reused: No 00:24:34.203 ANA group ID: 1 00:24:34.203 Namespace Write Protected: No 00:24:34.203 Number of LBA Formats: 1 00:24:34.203 Current LBA Format: LBA Format #00 00:24:34.203 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:34.203 00:24:34.203 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:34.203 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:34.203 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:34.203 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:34.203 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:34.203 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:34.203 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:34.203 rmmod nvme_tcp 00:24:34.203 rmmod nvme_fabrics 00:24:34.203 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:34.203 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:34.204 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:34.204 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:34.204 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:34.204 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:34.204 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:34.204 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:34.204 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:34.204 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.204 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.204 17:15:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.106 17:15:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:36.106 
17:15:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:36.106 17:15:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:36.106 17:15:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:36.106 17:15:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:36.106 17:15:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:36.106 17:15:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:36.106 17:15:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:36.106 17:15:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:36.106 17:15:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:36.364 17:15:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:38.896 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:38.896 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:38.896 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:38.897 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:38.897 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:38.897 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:38.897 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:38.897 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:38.897 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:38.897 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:38.897 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:38.897 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:38.897 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:38.897 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:38.897 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:38.897 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:39.464 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:39.723 00:24:39.723 real 0m14.536s 00:24:39.723 user 0m3.127s 00:24:39.723 sys 0m7.395s 00:24:39.723 17:15:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:39.723 17:15:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.723 ************************************ 00:24:39.723 END TEST nvmf_identify_kernel_target 00:24:39.723 ************************************ 00:24:39.723 17:15:27 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:39.723 17:15:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:39.723 17:15:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:39.723 17:15:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:39.723 ************************************ 00:24:39.723 START TEST nvmf_auth_host 00:24:39.723 ************************************ 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 
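Before the nvmf_auth_host output that follows, the identify_kernel_target run above can be read as one sequence: the script exported the local /dev/nvme0n1 through the kernel nvmet stack at 10.0.0.1:4420 and then ran discovery and identify against it. A minimal sketch of that configfs sequence, reconstructed from the mkdir/echo/ln commands in the xtrace, is shown below; the configfs attribute file names on the right of each redirect are inferred (xtrace only records the values being written), so treat them as assumptions rather than a verbatim copy of nvmf/common.sh:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=$nvmet/ports/1
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  # subsystem identity and host access
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
  echo 1 > "$subsys/attr_allow_any_host"
  # back namespace 1 with the local NVMe drive found earlier, then enable it
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  # TCP listener on the address assigned to cvl_0_1 above
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  # publish the subsystem on the port; discovery then returns the two records shown above
  ln -s "$subsys" "$port/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420

The two spdk_nvme_identify invocations above then target the same 10.0.0.1:4420 listener, first against nqn.2014-08.org.nvmexpress.discovery and then against nqn.2016-06.io.spdk:testnqn.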
00:24:39.723 * Looking for test storage... 00:24:39.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.723 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:39.724 17:15:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.993 
17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:44.993 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:44.993 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.993 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:44.993 Found net devices under 0000:86:00.0: 
cvl_0_0 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:44.994 Found net devices under 0000:86:00.1: cvl_0_1 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:44.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:24:44.994 00:24:44.994 --- 10.0.0.2 ping statistics --- 00:24:44.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.994 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:44.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:24:44.994 00:24:44.994 --- 10.0.0.1 ping statistics --- 00:24:44.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.994 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:44.994 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:45.251 17:15:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:45.251 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.251 17:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:45.251 17:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.251 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3193436 00:24:45.251 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3193436 00:24:45.251 17:15:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:45.251 17:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3193436 ']' 00:24:45.251 17:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.251 17:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:45.251 17:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
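The nvmf_tcp_init block above splits the two e810 ports between the root namespace and a dedicated target namespace, then sanity-checks the path with pings before the target application is launched. A condensed replay of that wiring, with interface names and addresses taken from the trace (nvmf/common.sh carries more bookkeeping than shown here):

  # Target port cvl_0_0 moves into its own namespace; initiator port cvl_0_1 stays in the root ns.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                                   # root ns -> target port
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator port

With the path verified, NVMF_APP is prefixed with NVMF_TARGET_NS_CMD, which is why the nvmf_tgt started just below runs under "ip netns exec cvl_0_0_ns_spdk" and listens from inside the target namespace.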
00:24:45.251 17:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:45.251 17:15:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fbce5b836863e43f6f03051f3e893874 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.gCn 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fbce5b836863e43f6f03051f3e893874 0 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fbce5b836863e43f6f03051f3e893874 0 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fbce5b836863e43f6f03051f3e893874 00:24:46.181 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.gCn 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.gCn 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.gCn 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:46.182 
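Each gen_dhchap_key call traced above follows the same recipe: draw len/2 random bytes as a hex string, wrap that string into a DHHC-1 secret through the inline python helper, and stash the result in a mode-0600 temp file whose path is echoed back to the caller. A rough sketch of one round ("null 32", digest index 0); the body of format_dhchap_key, and how its output reaches the file, is not visible in the trace and is left opaque here:

  key=$(xxd -p -c0 -l 16 /dev/urandom)          # 16 random bytes -> 32 hex characters
  file=$(mktemp -t spdk.key-null.XXX)           # e.g. /tmp/spdk.key-null.gCn
  format_dhchap_key "$key" 0                    # inline python; produces the DHHC-1:00:<encoded>: form
  chmod 0600 "$file"
  echo "$file"                                  # path captured into keys[0] by the caller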
17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1f8ad62d483160a5bdbb7b6f2ec78ba7b412bd4b2707ac0aa4576de629d181bf 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Kko 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1f8ad62d483160a5bdbb7b6f2ec78ba7b412bd4b2707ac0aa4576de629d181bf 3 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1f8ad62d483160a5bdbb7b6f2ec78ba7b412bd4b2707ac0aa4576de629d181bf 3 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1f8ad62d483160a5bdbb7b6f2ec78ba7b412bd4b2707ac0aa4576de629d181bf 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Kko 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Kko 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Kko 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a385012c24584ec4f48bb2f0c0a79085bdfed2f10a685c04 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3uc 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a385012c24584ec4f48bb2f0c0a79085bdfed2f10a685c04 0 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a385012c24584ec4f48bb2f0c0a79085bdfed2f10a685c04 0 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a385012c24584ec4f48bb2f0c0a79085bdfed2f10a685c04 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3uc 00:24:46.182 17:15:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3uc 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.3uc 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=832d9934db6fdb631c91945cb4e5d75011fcdca1e9aded15 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.17y 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 832d9934db6fdb631c91945cb4e5d75011fcdca1e9aded15 2 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 832d9934db6fdb631c91945cb4e5d75011fcdca1e9aded15 2 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=832d9934db6fdb631c91945cb4e5d75011fcdca1e9aded15 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.17y 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.17y 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.17y 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=79fc8ede4ef03e1337d25c154c369109 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.DZi 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 79fc8ede4ef03e1337d25c154c369109 1 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 79fc8ede4ef03e1337d25c154c369109 1 
00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=79fc8ede4ef03e1337d25c154c369109 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:46.182 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.DZi 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.DZi 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.DZi 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b501345288d704dd06ed9ae3fc4a3b28 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.pua 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b501345288d704dd06ed9ae3fc4a3b28 1 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b501345288d704dd06ed9ae3fc4a3b28 1 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b501345288d704dd06ed9ae3fc4a3b28 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.pua 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.pua 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.pua 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=6bf6534d5bfcc28777932611e7749b04bcfc97d1454a66ac 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.sSA 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6bf6534d5bfcc28777932611e7749b04bcfc97d1454a66ac 2 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6bf6534d5bfcc28777932611e7749b04bcfc97d1454a66ac 2 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6bf6534d5bfcc28777932611e7749b04bcfc97d1454a66ac 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.sSA 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.sSA 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.sSA 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=19a6d3fa8a63d63f6ea0dba46fc31e2d 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.LX3 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 19a6d3fa8a63d63f6ea0dba46fc31e2d 0 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 19a6d3fa8a63d63f6ea0dba46fc31e2d 0 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=19a6d3fa8a63d63f6ea0dba46fc31e2d 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:46.440 17:15:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.LX3 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.LX3 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.LX3 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2ff2b1121cdc19ccfe21fbe8274205e0667c6ca3d3682a9e278a2f9fff95a12c 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.rqk 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2ff2b1121cdc19ccfe21fbe8274205e0667c6ca3d3682a9e278a2f9fff95a12c 3 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2ff2b1121cdc19ccfe21fbe8274205e0667c6ca3d3682a9e278a2f9fff95a12c 3 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2ff2b1121cdc19ccfe21fbe8274205e0667c6ca3d3682a9e278a2f9fff95a12c 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:46.440 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:46.441 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.rqk 00:24:46.441 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.rqk 00:24:46.441 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.rqk 00:24:46.441 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:46.441 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3193436 00:24:46.441 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3193436 ']' 00:24:46.441 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.441 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:46.441 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
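The second field of each generated secret records the transformation the key material expects, matching the digests table used above: DHHC-1:00: for null, :01: for sha256, :02: for sha384, :03: for sha512. That is why keys[0] (/tmp/spdk.key-null.gCn, 32 hex chars) later shows up in the trace as a DHHC-1:00: secret while its controller-side counterpart ckeys[0] (/tmp/spdk.key-sha512.Kko, 64 hex chars) shows up as DHHC-1:03:. The ckeys entries are deliberately generated with a different digest and length than their paired keys, and ckeys[4] is left empty so key4 can be exercised without a controller key.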
00:24:46.441 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:46.441 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.698 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:46.698 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:24:46.698 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:46.698 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gCn 00:24:46.698 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.698 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.698 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.698 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Kko ]] 00:24:46.698 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Kko 00:24:46.698 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.3uc 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.17y ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.17y 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.DZi 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.pua ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.pua 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.sSA 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.LX3 ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.LX3 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.rqk 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
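The rpc_cmd loop above is what hands those files to the running target: every generated secret is registered as a named keyring entry (key0..key4 for the host secrets, ckey0..ckey3 for the controller secrets), and the empty ckeys[4] is simply skipped. In outline, where rpc_cmd is the test wrapper around scripts/rpc.py talking to the nvmf_tgt started inside the namespace:

  for i in "${!keys[@]}"; do
      rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"
      [[ -n ${ckeys[$i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  done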
00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:46.699 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:46.956 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:46.956 17:15:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:49.480 Waiting for block devices as requested 00:24:49.480 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:49.480 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:49.480 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:49.480 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:49.480 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:49.480 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:49.480 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:49.738 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:49.738 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:49.738 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:49.996 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:49.996 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:49.996 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:49.996 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:50.253 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:50.253 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:50.253 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:50.819 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:50.819 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:50.819 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:50.819 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:24:50.819 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:50.819 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:24:50.819 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:50.819 17:15:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:50.819 17:15:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:51.077 No valid GPT data, bailing 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:51.077 00:24:51.077 Discovery Log Number of Records 2, Generation counter 2 00:24:51.077 =====Discovery Log Entry 0====== 00:24:51.077 trtype: tcp 00:24:51.077 adrfam: ipv4 00:24:51.077 subtype: current discovery subsystem 00:24:51.077 treq: not specified, sq flow control disable supported 00:24:51.077 portid: 1 00:24:51.077 trsvcid: 4420 00:24:51.077 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:51.077 traddr: 10.0.0.1 00:24:51.077 eflags: none 00:24:51.077 sectype: none 00:24:51.077 =====Discovery Log Entry 1====== 00:24:51.077 trtype: tcp 00:24:51.077 adrfam: ipv4 00:24:51.077 subtype: nvme subsystem 00:24:51.077 treq: not specified, sq flow control disable supported 00:24:51.077 portid: 1 00:24:51.077 trsvcid: 4420 00:24:51.077 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:51.077 traddr: 10.0.0.1 00:24:51.077 eflags: none 00:24:51.077 sectype: none 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 
]] 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.077 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.335 nvme0n1 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.335 17:15:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.335 
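nvmet_auth_set_key, invoked above as "sha256 ffdhe2048 0", programs the kernel target's side of the handshake for the host entry created during nvmet_auth_init (the mkdir under /sys/kernel/config/nvmet/hosts a few lines up). The echo targets are hidden by the xtrace, since redirections are not traced, so the configfs attribute names below are an assumption about where those writes land rather than a verbatim copy of host/auth.sh:

  # Hypothetical expansion of nvmet_auth_set_key sha256 ffdhe2048 0
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host_dir/dhchap_hash"       # digest under test
  echo ffdhe2048      > "$host_dir/dhchap_dhgroup"    # DH group under test
  echo "$key"         > "$host_dir/dhchap_key"        # DHHC-1:00: host secret (keys[0])
  echo "$ckey"        > "$host_dir/dhchap_ctrl_key"   # DHHC-1:03: controller secret (ckeys[0])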
17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.335 17:15:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.593 nvme0n1 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.593 17:15:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.593 nvme0n1 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
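The initiator half of connect_authenticate then mirrors that configuration over RPC: restrict the bdev_nvme layer to the digest/DH-group combination under test, attach with the matching key pair, and confirm a controller appears before detaching and moving on to the next keyid. Condensed from the trace above for the keyid 1 round:

  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0              # tear down before the next combination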
00:24:51.593 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:51.850 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.851 nvme0n1 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.851 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:24:52.108 17:15:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.108 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.109 nvme0n1 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.109 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.366 nvme0n1 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.366 17:15:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.366 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.366 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.623 nvme0n1 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.623 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.880 nvme0n1 00:24:52.880 
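Each repetition in this stretch of the log is one pass of the connect_authenticate helper: set the allowed digest/dhgroup pair, attach a controller over TCP with the matching host key, confirm the controller enumerates under the expected name, then detach. A minimal stand-alone sketch of one such pass (sha256 + ffdhe3072, keyid 1) is below; it assumes the target from earlier in the run is still listening on 10.0.0.1:4420 and that the key1/ckey1 key names were already registered with the SPDK keyring before this point, which is not shown in this part of the log. rpc_cmd is the test suite's thin wrapper around rpc.py.

  # Sketch of one connect_authenticate pass as traced above.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # The namespace shows up as nvme0n1; verify the controller name, then tear it down.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

If the DH-HMAC-CHAP handshake fails for a given digest/dhgroup/key combination, the attach_controller call is what fails, so each iteration exercises exactly one negotiation.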
17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.880 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.137 nvme0n1 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
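The get_main_ns_ip helper traced repeatedly above just maps the active transport to the shell variable that holds the address to dial and prints its value (10.0.0.1, the TCP initiator address, in this run). A rough reconstruction inferred only from the xtrace lines here follows; the transport variable name ($TEST_TRANSPORT) and the error handling are assumptions, and the real helper lives in nvmf/common.sh.

  # Sketch of get_main_ns_ip as inferred from the nvmf/common.sh@741-755 trace.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      # Bail out if the transport or its candidate variable name is unset (assumed behavior).
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      # Indirect expansion: print the value of e.g. NVMF_INITIATOR_IP (10.0.0.1 here).
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }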
00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.137 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.138 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.138 17:15:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.138 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:53.138 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.138 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.395 nvme0n1 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.395 
17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.395 17:15:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.395 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.395 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.395 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.395 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.395 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.395 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.395 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.395 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.395 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.395 17:15:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.395 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.395 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.395 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:53.395 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.395 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.654 nvme0n1 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:53.654 17:15:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.654 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.969 nvme0n1 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.969 17:15:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:53.969 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.226 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.226 nvme0n1 00:24:54.226 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.226 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.227 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.227 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.227 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.227 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.227 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.227 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.227 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.227 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.484 17:15:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.484 17:15:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.742 nvme0n1 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
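Stepping back, this whole section is driven by two nested loops visible at host/auth.sh@101-102: the outer loop walks the FFDHE groups, the inner loop walks the key indices, and each combination first sets the key on the target side via nvmet_auth_set_key and then runs connect_authenticate from the host. A sketch of that driver, using only the values that appear in this part of the log (later parts of the run may cover additional digests and groups), looks like:

  # Driver loop shape inferred from the host/auth.sh@101-104 trace lines.
  digest=sha256
  dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144")
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do                        # keyids 0..4 in this run
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target-side key/ckey setup
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # host attaches with DH-HMAC-CHAP
      done
  done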
00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.742 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.999 nvme0n1 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.999 17:15:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.999 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.256 nvme0n1 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:24:55.257 17:15:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.257 17:15:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.820 nvme0n1 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.820 
17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.820 17:15:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.820 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.077 nvme0n1 00:24:56.077 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.077 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.077 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.077 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.077 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.077 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.334 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.334 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.334 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.335 17:15:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.592 nvme0n1 00:24:56.592 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.592 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.592 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.592 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.592 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.592 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.592 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.593 
17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.593 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.156 nvme0n1 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.156 17:15:44 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.157 17:15:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.720 nvme0n1 00:24:57.720 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.720 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.720 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.720 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.720 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.721 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.286 nvme0n1 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.286 17:15:45 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.286 17:15:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.851 nvme0n1 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.851 17:15:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.416 nvme0n1 00:24:59.416 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.416 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.416 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.416 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.416 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.416 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.674 
17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
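[editor's note] At this point the trace is working through the sha256/ffdhe8192 combination with keyid 3. Every iteration follows the same pattern visible above: nvmet_auth_set_key pushes the hash, DH group and DHHC-1 secret to the target, connect_authenticate then restricts the host driver to the same digest/dhgroup via bdev_nvme_set_options and attaches a controller with the matching key objects, and the controller is verified and detached before the next combination. A minimal sketch of one iteration, assuming rpc_cmd is the usual SPDK test helper that forwards to scripts/rpc.py and that the key objects key${keyid}/ckey${keyid} were registered earlier in the script (that setup is not shown in this excerpt):

  digest=sha256 dhgroup=ffdhe8192 keyid=3

  # Target side: set the hash, DH group and secret the subsystem will expect
  # (mirrors the echo 'hmac(sha256)' / echo ffdhe8192 / echo DHHC-1:... lines above).
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

  # Host side: allow only the same digest/dhgroup, then attach with the matching
  # host key. keyid 3 has a controller key, so it is passed here; the script only
  # adds it when ${ckeys[keyid]} is non-empty, per the ${ckeys[keyid]:+...} expansion.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # Confirm the authenticated connection came up, then tear it down.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
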
00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.674 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.240 nvme0n1 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:00.240 
17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.240 17:15:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.806 nvme0n1 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.806 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.064 nvme0n1 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
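[editor's note] The get_main_ns_ip expansions that recur throughout this trace (and appear again just below) all resolve the same way for this job: the helper maps each transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and dereferences the one matching the transport under test, yielding 10.0.0.1 here. A rough reconstruction from the xtrace lines, assuming the transport name is carried in a variable such as TEST_TRANSPORT; the actual body in nvmf/common.sh is not shown in this excerpt:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      # Fail if no transport is set or it has no candidate variable.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

      # Indirect expansion: for tcp this turns NVMF_INITIATOR_IP into 10.0.0.1,
      # matching the "[[ -z 10.0.0.1 ]]" and "echo 10.0.0.1" lines in the trace.
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }
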
00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.064 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.322 nvme0n1 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.322 17:15:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.580 nvme0n1 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:01.580 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.581 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.581 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.581 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.581 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.581 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.581 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.581 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.581 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.581 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:01.581 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.581 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.838 nvme0n1 00:25:01.838 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.838 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.839 nvme0n1 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.839 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.096 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.097 nvme0n1 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.097 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
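On the target side, each nvmet_auth_set_key call in the trace echoes three things for the key index under test: the digest ('hmac(sha384)'), the DH group, and the DHHC-1 host secret (plus the controller secret when bidirectional authentication is exercised). A rough sketch of what that amounts to, assuming the Linux nvmet configfs host entry and its dhchap_* attributes; the trace shows only the echoed values, not these paths:

  # Hypothetical configfs location of the allowed host entry.
  HOST_DIR=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  KEY='DHHC-1:00:...'    # host secret as printed in the trace
  CKEY='DHHC-1:02:...'   # controller secret; empty when only unidirectional auth is tested

  echo 'hmac(sha384)' > "$HOST_DIR/dhchap_hash"      # digest under test
  echo ffdhe3072      > "$HOST_DIR/dhchap_dhgroup"   # DH group under test
  echo "$KEY"         > "$HOST_DIR/dhchap_key"
  [ -n "$CKEY" ] && echo "$CKEY" > "$HOST_DIR/dhchap_ctrl_key"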
00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.355 nvme0n1 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.355 17:15:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.355 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.355 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.355 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.612 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.612 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.612 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:02.612 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:02.612 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.612 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:02.612 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:02.612 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.613 nvme0n1 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.613 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.870 nvme0n1 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.870 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.128 nvme0n1 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.128 17:15:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.128 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.386 17:15:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.386 nvme0n1 00:25:03.386 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.386 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.386 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.386 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.386 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.645 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.903 nvme0n1 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.903 17:15:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.903 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.160 nvme0n1 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:04.160 17:15:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.160 17:15:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.417 nvme0n1 00:25:04.417 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.417 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.417 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.417 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.417 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.417 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.417 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.417 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.417 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.417 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.675 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:04.676 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.933 nvme0n1 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.933 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.191 nvme0n1 00:25:05.191 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.191 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.191 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.191 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.191 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.191 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.191 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.191 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.191 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.191 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.449 17:15:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.707 nvme0n1 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.707 17:15:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:05.707 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.708 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.272 nvme0n1 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.272 17:15:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.530 nvme0n1 00:25:06.530 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.530 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.530 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.530 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.530 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.530 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:06.787 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
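The host/auth.sh@100, @101, @102 and @103 markers repeating through this trace come from a three-level loop in auth.sh: every digest is exercised against every DH group and every key index. A minimal sketch of that loop shape, reconstructed only from the function names and array references visible in the trace (the actual contents of the digests, dhgroups and keys arrays are defined earlier in the script and are assumed here):

    # Reconstructed shape of the test loop (host/auth.sh@100-103); array contents are assumed.
    for digest in "${digests[@]}"; do                 # sha384 and sha512 both appear in this trace
        for dhgroup in "${dhgroups[@]}"; do           # ffdhe2048 through ffdhe8192 appear in this trace
            for keyid in "${!keys[@]}"; do            # key indices 0-4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side: the echo of 'hmac(...)', the dhgroup and the DHHC keys at host/auth.sh@48-51
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side: attach, verify, detach (see the condensed sequence further below)
            done
        done
    done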
00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.788 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.045 nvme0n1 00:25:07.045 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.045 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.045 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.045 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
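Each iteration visible in this section then runs the same host-side RPC sequence (host/auth.sh@60-65): restrict the allowed digest and DH group, attach with the DH-HMAC-CHAP key for the current key index, check that the controller shows up as nvme0, and detach. Condensed into one place, using only the rpc_cmd invocations that appear verbatim in the trace (error handling and the xtrace plumbing around them are omitted, and the ckey array is populated by the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion shown at host/auth.sh@58):

    # Per-iteration host-side sequence as it appears in the trace (host/auth.sh@60-65).
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"       # ckey expands to nothing when no controller key is set (keyid 4 in this trace)
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0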
00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.046 17:15:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.610 nvme0n1 00:25:07.610 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.610 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.610 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.610 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.610 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.610 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.867 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.867 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.868 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.431 nvme0n1 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.431 17:15:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.994 nvme0n1 00:25:08.994 17:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.994 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.994 17:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.994 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.994 17:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.995 17:15:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.926 nvme0n1 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.926 17:15:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.926 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.515 nvme0n1 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.515 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.516 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.516 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.516 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.516 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.516 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.516 17:15:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.516 17:15:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:10.516 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.516 17:15:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.516 nvme0n1 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.516 17:15:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:10.516 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.774 nvme0n1 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.774 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.032 nvme0n1 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.032 17:15:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.032 17:15:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.032 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.290 nvme0n1 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.290 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.548 nvme0n1 00:25:11.548 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.548 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.548 17:15:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.548 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.548 17:15:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.548 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.806 nvme0n1 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.806 
17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:11.806 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.807 17:15:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.807 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.064 nvme0n1 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.064 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
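(For context: the nvmet_auth_set_key calls traced above configure DH-HMAC-CHAP on the kernel nvmet target before each connect attempt; the bare echo 'hmac(sha512)', echo ffdhe3072 and echo DHHC-1:... records are those writes, with the configfs redirections hidden by xtrace. Below is a minimal sketch of the idea; the configfs paths and attribute names are assumptions, not taken from this log.)

  # Hypothetical sketch of what nvmet_auth_set_key is doing; paths and
  # attribute names are assumed, and xtrace does not show the redirections.
  hostnqn=nqn.2024-02.io.spdk:host0
  host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
  key='DHHC-1:01:...'     # host key for this keyid (value elided here)
  ckey='DHHC-1:01:...'    # controller key; empty for unidirectional auth

  echo 'hmac(sha512)' > "$host_dir/dhchap_hash"      # digest
  echo ffdhe3072      > "$host_dir/dhchap_dhgroup"   # DH group
  echo "$key"         > "$host_dir/dhchap_key"
  [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"
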
00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.065 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.323 nvme0n1 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.323 17:15:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
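(Host side, for reference: each connect_authenticate iteration traced here reduces to four RPCs: restrict the allowed digest and DH group, attach with the matching keyring keys, verify that nvme0 showed up, then detach. A rough equivalent driven through SPDK's rpc.py is sketched below; the script path is assumed, and key3/ckey3 refer to keyring entries registered earlier in the test, outside this excerpt.)

  # Hedged sketch of one iteration; the flags mirror the rpc_cmd calls above.
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0
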
00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.323 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.580 nvme0n1 00:25:12.580 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.580 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.580 17:15:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.580 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.581 17:15:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.581 
17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.581 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.839 nvme0n1 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.839 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.097 nvme0n1 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.097 17:16:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.097 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.355 nvme0n1 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
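(The get_main_ns_ip fragments repeated throughout pick the address to dial by transport: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp; in this run it always resolves to 10.0.0.1. A condensed sketch of that selection logic follows, with the transport variable name assumed.)

  # Reconstruction of the selection seen in the trace: map the transport to the
  # name of the variable holding the address, then dereference it.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      ip=${!ip}          # indirect expansion; 10.0.0.1 in this run
      [[ -z $ip ]] && return 1
      echo "$ip"
  }
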
00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.355 17:16:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.355 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.355 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.355 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.355 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.355 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.355 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.355 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.355 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.355 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.355 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.355 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.355 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.355 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.355 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.613 nvme0n1 00:25:13.613 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.613 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:13.613 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.613 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.613 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.613 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.870 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.128 nvme0n1 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.128 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.386 nvme0n1 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.386 17:16:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.951 nvme0n1 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.951 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.209 nvme0n1 00:25:15.209 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.209 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.209 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.209 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.209 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.209 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.467 17:16:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.725 nvme0n1 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.725 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.726 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.726 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.726 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.726 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.726 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.726 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:15.726 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.726 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.290 nvme0n1 00:25:16.290 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.291 17:16:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.548 nvme0n1 00:25:16.548 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.548 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.548 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.549 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.549 17:16:04 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJjZTViODM2ODYzZTQzZjZmMDMwNTFmM2U4OTM4NzTXuYHt: 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: ]] 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY4YWQ2MmQ0ODMxNjBhNWJkYmI3YjZmMmVjNzhiYTdiNDEyYmQ0YjI3MDdhYzBhYTQ1NzZkZTYyOWQxODFiZit4JFk=: 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.806 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.372 nvme0n1 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.372 17:16:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.937 nvme0n1 00:25:17.937 17:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.937 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.937 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.937 17:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.937 17:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.937 17:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.938 17:16:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzlmYzhlZGU0ZWYwM2UxMzM3ZDI1YzE1NGMzNjkxMDlh7lHs: 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: ]] 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjUwMTM0NTI4OGQ3MDRkZDA2ZWQ5YWUzZmM0YTNiMjgM+wAH: 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.938 17:16:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.502 nvme0n1 00:25:18.502 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.502 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.502 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.503 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.503 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.503 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.760 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.760 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.760 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.760 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.760 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.760 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmJmNjUzNGQ1YmZjYzI4Nzc3OTMyNjExZTc3NDliMDRiY2ZjOTdkMTQ1NGE2NmFjfs2flw==: 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: ]] 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTlhNmQzZmE4YTYzZDYzZjZlYTBkYmE0NmZjMzFlMmTKp+77: 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:18.761 17:16:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.761 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.326 nvme0n1 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:19.326 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MmZmMmIxMTIxY2RjMTljY2ZlMjFmYmU4Mjc0MjA1ZTA2NjdjNmNhM2QzNjgyYTllMjc4YTJmOWZmZjk1YTEyY/5IjHc=: 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:19.327 17:16:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.892 nvme0n1 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTM4NTAxMmMyNDU4NGVjNGY0OGJiMmYwYzBhNzkwODViZGZlZDJmMTBhNjg1YzA0n8CKew==: 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: ]] 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODMyZDk5MzRkYjZmZGI2MzFjOTE5NDVjYjRlNWQ3NTAxMWZjZGNhMWU5YWRlZDE1Yk8z+w==: 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.892 
17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.892 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.154 request: 00:25:20.154 { 00:25:20.154 "name": "nvme0", 00:25:20.154 "trtype": "tcp", 00:25:20.154 "traddr": "10.0.0.1", 00:25:20.154 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:20.154 "adrfam": "ipv4", 00:25:20.154 "trsvcid": "4420", 00:25:20.154 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:20.154 "method": "bdev_nvme_attach_controller", 00:25:20.154 "req_id": 1 00:25:20.154 } 00:25:20.154 Got JSON-RPC error response 00:25:20.154 response: 00:25:20.154 { 00:25:20.154 "code": -32602, 00:25:20.154 "message": "Invalid parameters" 00:25:20.154 } 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:20.154 
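The connect_authenticate iterations traced above drive SPDK's host-side DH-CHAP support entirely through the JSON-RPC interface. A minimal sketch of one such iteration, assuming a target already listening on 10.0.0.1:4420 with the placeholder NQNs from this run and host keys named key3/ckey3 (paths relative to the SPDK tree):

    # Restrict the host to the digest/dhgroup under test, then attach with
    # bidirectional authentication (controller key ckey3), as traced above.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # A successful negotiation leaves exactly one controller behind.
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0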
17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.154 request: 00:25:20.154 { 00:25:20.154 "name": "nvme0", 00:25:20.154 "trtype": "tcp", 00:25:20.154 "traddr": "10.0.0.1", 00:25:20.154 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:20.154 "adrfam": "ipv4", 00:25:20.154 "trsvcid": "4420", 00:25:20.154 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:20.154 "dhchap_key": "key2", 00:25:20.154 "method": "bdev_nvme_attach_controller", 00:25:20.154 "req_id": 1 00:25:20.154 } 00:25:20.154 Got JSON-RPC error response 00:25:20.154 response: 00:25:20.154 { 00:25:20.154 "code": -32602, 00:25:20.154 "message": "Invalid parameters" 00:25:20.154 } 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
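Both failed attach attempts above are the expected outcome: the subsystem requires DH-CHAP, so a connect without the right key (or with a mismatched controller key) is rejected and the RPC surfaces the JSON-RPC -32602 "Invalid parameters" error seen in the request/response dump, which the NOT wrapper turns into a passing assertion. A rough equivalent of that negative check, using the same placeholder NQNs:

    # Sketch only: an attach without the required --dhchap-key must fail ...
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unexpected: unauthenticated attach succeeded" >&2; exit 1
    fi
    # ... and must not leave a controller behind.
    [ "$(scripts/rpc.py bdev_nvme_get_controllers | jq length)" -eq 0 ]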
00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.154 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.411 request: 00:25:20.411 { 00:25:20.411 "name": "nvme0", 00:25:20.411 "trtype": "tcp", 00:25:20.411 "traddr": "10.0.0.1", 00:25:20.411 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:20.411 "adrfam": "ipv4", 00:25:20.411 "trsvcid": "4420", 00:25:20.411 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:20.411 "dhchap_key": "key1", 00:25:20.411 "dhchap_ctrlr_key": "ckey2", 00:25:20.411 "method": "bdev_nvme_attach_controller", 00:25:20.411 
"req_id": 1 00:25:20.411 } 00:25:20.411 Got JSON-RPC error response 00:25:20.411 response: 00:25:20.411 { 00:25:20.411 "code": -32602, 00:25:20.411 "message": "Invalid parameters" 00:25:20.411 } 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:20.411 rmmod nvme_tcp 00:25:20.411 rmmod nvme_fabrics 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3193436 ']' 00:25:20.411 17:16:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3193436 00:25:20.412 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 3193436 ']' 00:25:20.412 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 3193436 00:25:20.412 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:25:20.412 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:20.412 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3193436 00:25:20.412 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:20.412 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:20.412 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3193436' 00:25:20.412 killing process with pid 3193436 00:25:20.412 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 3193436 00:25:20.412 17:16:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 3193436 00:25:20.669 17:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:20.669 17:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:20.669 17:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:20.669 17:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:20.669 17:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:20.669 
17:16:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.669 17:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:20.669 17:16:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.567 17:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:22.567 17:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:22.567 17:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:22.567 17:16:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:22.567 17:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:22.567 17:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:22.567 17:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:22.567 17:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:22.567 17:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:22.567 17:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:22.567 17:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:22.567 17:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:22.825 17:16:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:25.352 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:25.352 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:26.288 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:26.288 17:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.gCn /tmp/spdk.key-null.3uc /tmp/spdk.key-sha256.DZi /tmp/spdk.key-sha384.sSA /tmp/spdk.key-sha512.rqk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:26.288 17:16:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:28.813 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:28.813 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:28.813 0000:00:04.6 (8086 2021): Already 
using the vfio-pci driver 00:25:28.813 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:28.813 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:28.813 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:28.813 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:28.813 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:28.813 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:28.813 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:28.813 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:28.813 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:28.813 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:28.813 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:28.813 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:28.813 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:28.813 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:28.813 00:25:28.813 real 0m48.874s 00:25:28.813 user 0m44.215s 00:25:28.813 sys 0m11.125s 00:25:28.813 17:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:28.813 17:16:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.813 ************************************ 00:25:28.813 END TEST nvmf_auth_host 00:25:28.813 ************************************ 00:25:28.813 17:16:16 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:25:28.813 17:16:16 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:28.813 17:16:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:28.813 17:16:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:28.813 17:16:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:28.813 ************************************ 00:25:28.813 START TEST nvmf_digest 00:25:28.813 ************************************ 00:25:28.813 17:16:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:28.813 * Looking for test storage... 
00:25:28.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:28.813 17:16:16 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.813 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:28.814 17:16:16 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:28.814 17:16:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:34.116 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:34.116 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:34.116 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:34.117 Found net devices under 0000:86:00.0: cvl_0_0 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:34.117 Found net devices under 0000:86:00.1: cvl_0_1 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:34.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:34.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:25:34.117 00:25:34.117 --- 10.0.0.2 ping statistics --- 00:25:34.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.117 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:34.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:34.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:25:34.117 00:25:34.117 --- 10.0.0.1 ping statistics --- 00:25:34.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:34.117 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:34.117 ************************************ 00:25:34.117 START TEST nvmf_digest_clean 00:25:34.117 ************************************ 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3206689 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3206689 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3206689 ']' 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:34.117 17:16:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:34.117 [2024-05-15 17:16:21.720578] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:25:34.117 [2024-05-15 17:16:21.720615] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.117 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.375 [2024-05-15 17:16:21.777008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.375 [2024-05-15 17:16:21.855788] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.375 [2024-05-15 17:16:21.855824] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.375 [2024-05-15 17:16:21.855831] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.375 [2024-05-15 17:16:21.855837] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.375 [2024-05-15 17:16:21.855842] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
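The listener this digest test will connect to lives inside the cvl_0_0_ns_spdk namespace prepared by nvmf_tcp_init a few lines earlier; a condensed recap of those steps, using the interface names and addresses from this run (cvl_0_0 and cvl_0_1 are the two e810 ports detected above):

    # Move one port into a namespace and address both ends, as traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check reachability in both directions, then load the host driver.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp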
00:25:34.375 [2024-05-15 17:16:21.855859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.939 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:34.939 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:25:34.939 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:34.939 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:34.939 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:34.939 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.939 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:34.939 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:34.939 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:34.939 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.939 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:35.196 null0 00:25:35.196 [2024-05-15 17:16:22.630958] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.196 [2024-05-15 17:16:22.654965] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:35.196 [2024-05-15 17:16:22.655174] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3206818 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3206818 /var/tmp/bperf.sock 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3206818 ']' 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:25:35.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:35.196 17:16:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:35.196 [2024-05-15 17:16:22.701308] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:25:35.196 [2024-05-15 17:16:22.701350] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206818 ] 00:25:35.196 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.196 [2024-05-15 17:16:22.754659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.196 [2024-05-15 17:16:22.833477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.126 17:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:36.126 17:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:25:36.126 17:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:36.126 17:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:36.126 17:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:36.126 17:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.126 17:16:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.382 nvme0n1 00:25:36.382 17:16:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:36.382 17:16:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:36.639 Running I/O for 2 seconds... 
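Each run_bperf pass follows the pattern visible in the trace above: start a dedicated bdevperf instance with --wait-for-rpc, finish its framework init over the bperf socket, attach an NVMe-oF TCP controller with data digest enabled (--ddgst), and only then kick off the timed workload through bdevperf.py. Sketch of this first pass (randread, 4 KiB blocks, queue depth 128), with paths relative to the SPDK tree:

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Runs the 2-second workload against the freshly attached nvme0n1 bdev.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests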
00:25:38.547 00:25:38.547 Latency(us) 00:25:38.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.547 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:38.547 nvme0n1 : 2.00 26660.60 104.14 0.00 0.00 4796.27 2222.53 9801.91 00:25:38.547 =================================================================================================================== 00:25:38.547 Total : 26660.60 104.14 0.00 0.00 4796.27 2222.53 9801.91 00:25:38.547 0 00:25:38.547 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:38.547 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:38.547 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:38.547 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:38.547 | select(.opcode=="crc32c") 00:25:38.547 | "\(.module_name) \(.executed)"' 00:25:38.547 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3206818 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3206818 ']' 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3206818 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3206818 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3206818' 00:25:38.804 killing process with pid 3206818 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3206818 00:25:38.804 Received shutdown signal, test time was about 2.000000 seconds 00:25:38.804 00:25:38.804 Latency(us) 00:25:38.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.804 =================================================================================================================== 00:25:38.804 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:38.804 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3206818 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:39.062 17:16:26 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3207420 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3207420 /var/tmp/bperf.sock 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3207420 ']' 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:39.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:39.062 17:16:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:39.062 [2024-05-15 17:16:26.607481] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:25:39.062 [2024-05-15 17:16:26.607531] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207420 ] 00:25:39.062 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:39.062 Zero copy mechanism will not be used. 
00:25:39.062 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.062 [2024-05-15 17:16:26.660590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.319 [2024-05-15 17:16:26.729279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.883 17:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:39.883 17:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:25:39.883 17:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:39.883 17:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:39.883 17:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:40.140 17:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:40.140 17:16:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:40.396 nvme0n1 00:25:40.396 17:16:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:40.396 17:16:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:40.653 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:40.653 Zero copy mechanism will not be used. 00:25:40.653 Running I/O for 2 seconds... 
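With the bperf app listening, the run above is driven entirely over its RPC socket: start the framework, attach an NVMe-oF TCP controller with data digest enabled, then kick the 2-second workload from bdevperf.py. A condensed replay of those three steps, with the same paths and arguments as in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock framework_start_init
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0        # --ddgst: enable the TCP data digest
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                   # runs the -w/-o/-q/-t workload configured at launch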
00:25:42.549 00:25:42.549 Latency(us) 00:25:42.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.549 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:42.549 nvme0n1 : 2.00 5143.13 642.89 0.00 0.00 3107.98 648.24 11169.61 00:25:42.549 =================================================================================================================== 00:25:42.549 Total : 5143.13 642.89 0.00 0.00 3107.98 648.24 11169.61 00:25:42.549 0 00:25:42.549 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:42.549 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:42.549 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:42.549 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:42.549 | select(.opcode=="crc32c") 00:25:42.549 | "\(.module_name) \(.executed)"' 00:25:42.549 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3207420 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3207420 ']' 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3207420 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3207420 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3207420' 00:25:42.811 killing process with pid 3207420 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3207420 00:25:42.811 Received shutdown signal, test time was about 2.000000 seconds 00:25:42.811 00:25:42.811 Latency(us) 00:25:42.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.811 =================================================================================================================== 00:25:42.811 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:42.811 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3207420 00:25:43.067 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:43.067 17:16:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:43.067 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:43.067 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:43.067 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:43.067 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:43.067 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:43.067 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3208119 00:25:43.067 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:43.067 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3208119 /var/tmp/bperf.sock 00:25:43.068 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3208119 ']' 00:25:43.068 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:43.068 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:43.068 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:43.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:43.068 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:43.068 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:43.068 [2024-05-15 17:16:30.605199] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
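Each run finishes with the same accounting check already visible twice above: the script reads back the bperf app's accel statistics and confirms that crc32c digests were really executed, and by the expected module (software here, since scan_dsa=false). A sketch of that check, assuming bash and the jq filter from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    read -r acc_module acc_executed < <(
        $rpc -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 ))            # digests really went through the accel layer
    [[ $acc_module == software ]]     # expected module when no DSA device is in use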
00:25:43.068 [2024-05-15 17:16:30.605250] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208119 ] 00:25:43.068 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.068 [2024-05-15 17:16:30.659811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.324 [2024-05-15 17:16:30.727887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.324 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:43.324 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:25:43.324 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:43.324 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:43.324 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:43.580 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.580 17:16:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.580 nvme0n1 00:25:43.580 17:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:43.580 17:16:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:43.837 Running I/O for 2 seconds... 
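Once a run's statistics have been checked, the bperf instance is torn down with the killprocess helper whose xtrace follows every summary table above; the SIGTERM shutdown is what produces the second, all-zero latency table. Roughly, as traced:

    kill -0 "$bperfpid"                     # confirm it is still running
    ps --no-headers -o comm= "$bperfpid"    # a bperf child reports itself as reactor_1
    kill "$bperfpid"                        # bdevperf prints the shutdown summary on the way out
    wait "$bperfpid"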
00:25:45.734 00:25:45.734 Latency(us) 00:25:45.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.734 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:45.734 nvme0n1 : 2.00 27122.02 105.95 0.00 0.00 4710.57 2051.56 6582.09 00:25:45.734 =================================================================================================================== 00:25:45.734 Total : 27122.02 105.95 0.00 0.00 4710.57 2051.56 6582.09 00:25:45.734 0 00:25:45.734 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:45.734 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:45.734 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:45.734 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:45.734 | select(.opcode=="crc32c") 00:25:45.734 | "\(.module_name) \(.executed)"' 00:25:45.734 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3208119 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3208119 ']' 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3208119 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3208119 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3208119' 00:25:45.992 killing process with pid 3208119 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3208119 00:25:45.992 Received shutdown signal, test time was about 2.000000 seconds 00:25:45.992 00:25:45.992 Latency(us) 00:25:45.992 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.992 =================================================================================================================== 00:25:45.992 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:45.992 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3208119 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:46.249 17:16:33 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3208590 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3208590 /var/tmp/bperf.sock 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3208590 ']' 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:46.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:46.249 17:16:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:46.249 [2024-05-15 17:16:33.800252] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:25:46.249 [2024-05-15 17:16:33.800299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208590 ] 00:25:46.249 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:46.249 Zero copy mechanism will not be used. 
00:25:46.249 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.249 [2024-05-15 17:16:33.852859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.506 [2024-05-15 17:16:33.925617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.073 17:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:47.073 17:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:25:47.073 17:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:47.073 17:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:47.073 17:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:47.330 17:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.330 17:16:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.587 nvme0n1 00:25:47.587 17:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:47.587 17:16:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:47.843 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:47.844 Zero copy mechanism will not be used. 00:25:47.844 Running I/O for 2 seconds... 
00:25:49.738 00:25:49.738 Latency(us) 00:25:49.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.738 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:49.738 nvme0n1 : 2.00 5863.38 732.92 0.00 0.00 2723.62 1688.26 5698.78 00:25:49.738 =================================================================================================================== 00:25:49.738 Total : 5863.38 732.92 0.00 0.00 2723.62 1688.26 5698.78 00:25:49.738 0 00:25:49.738 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:49.738 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:49.739 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:49.739 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:49.739 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:49.739 | select(.opcode=="crc32c") 00:25:49.739 | "\(.module_name) \(.executed)"' 00:25:50.005 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:50.005 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:50.005 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:50.005 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:50.005 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3208590 00:25:50.005 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3208590 ']' 00:25:50.005 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3208590 00:25:50.005 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:50.006 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:50.006 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3208590 00:25:50.006 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:50.006 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:50.006 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3208590' 00:25:50.006 killing process with pid 3208590 00:25:50.006 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3208590 00:25:50.006 Received shutdown signal, test time was about 2.000000 seconds 00:25:50.006 00:25:50.006 Latency(us) 00:25:50.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.006 =================================================================================================================== 00:25:50.006 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:50.006 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3208590 00:25:50.264 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3206689 00:25:50.264 17:16:37 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3206689 ']' 00:25:50.264 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3206689 00:25:50.264 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:25:50.264 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:50.264 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3206689 00:25:50.264 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:50.264 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:50.264 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3206689' 00:25:50.264 killing process with pid 3206689 00:25:50.264 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3206689 00:25:50.264 [2024-05-15 17:16:37.806323] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:50.264 17:16:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3206689 00:25:50.520 00:25:50.520 real 0m16.344s 00:25:50.520 user 0m31.075s 00:25:50.520 sys 0m4.432s 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:50.520 ************************************ 00:25:50.520 END TEST nvmf_digest_clean 00:25:50.520 ************************************ 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:50.520 ************************************ 00:25:50.520 START TEST nvmf_digest_error 00:25:50.520 ************************************ 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3209312 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3209312 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3209312 ']' 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.520 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:50.521 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:50.521 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.521 [2024-05-15 17:16:38.128883] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:25:50.521 [2024-05-15 17:16:38.128922] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.521 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.778 [2024-05-15 17:16:38.185347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.778 [2024-05-15 17:16:38.263647] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.778 [2024-05-15 17:16:38.263682] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.778 [2024-05-15 17:16:38.263692] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.778 [2024-05-15 17:16:38.263697] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.778 [2024-05-15 17:16:38.263702] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
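For the nvmf_digest_error tests the target itself is restarted with --wait-for-rpc, precisely so that, as the next lines show, the crc32c opcode can be handed to the error-injection accel module before the framework comes up. A sketch of that bring-up using the nvmf_tgt invocation and helpers from the trace (the explicit framework_start_init call is an assumption; the trace only shows its effect when the TCP listener appears later):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    waitforlisten "$nvmfpid"                         # default socket /var/tmp/spdk.sock
    rpc_cmd accel_assign_opc -o crc32c -m error      # must land before framework init
    rpc_cmd framework_start_init                     # assumed step; the null0/TCP listener config follows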
00:25:50.778 [2024-05-15 17:16:38.263724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.379 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:51.379 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:25:51.379 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:51.379 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:51.379 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.379 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.379 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:51.379 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.379 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.379 [2024-05-15 17:16:38.973783] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:51.379 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.379 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:51.379 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:51.379 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.379 17:16:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.637 null0 00:25:51.637 [2024-05-15 17:16:39.063403] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.637 [2024-05-15 17:16:39.087410] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:51.637 [2024-05-15 17:16:39.087607] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3209561 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3209561 /var/tmp/bperf.sock 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3209561 ']' 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:51.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:51.637 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:51.637 [2024-05-15 17:16:39.136518] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:25:51.637 [2024-05-15 17:16:39.136559] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209561 ] 00:25:51.637 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.637 [2024-05-15 17:16:39.190055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.637 [2024-05-15 17:16:39.268981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.568 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:52.568 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:25:52.568 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:52.568 17:16:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:52.568 17:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:52.568 17:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.568 17:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.568 17:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.568 17:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.568 17:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.825 nvme0n1 00:25:52.825 17:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:52.825 17:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.825 17:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:52.825 17:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.825 17:16:40 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:52.825 17:16:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:53.082 Running I/O for 2 seconds... 00:25:53.082 [2024-05-15 17:16:40.561709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.561741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.561752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.573623] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.573647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.573656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.584837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.584862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.584871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.593751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.593772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.593781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.605382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.605402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.605410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.617021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.617042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.617050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.627691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.627711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.627719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.636477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.636498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.636506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.647699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.647719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.647728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.659786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.659806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.659814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.668483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.668502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.668511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.678135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.678154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.678162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.687654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.687673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.687681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.698018] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.698040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.698047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.705911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.705932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.705941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.716670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.716690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.716698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.725246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.725266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.725274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.082 [2024-05-15 17:16:40.736076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.082 [2024-05-15 17:16:40.736096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.082 [2024-05-15 17:16:40.736104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.745613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.745636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.745645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.754147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.754174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.754187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.763761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.763781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.763789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.773411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.773430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.773437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.783008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.783027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.783035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.791972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.791992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.792000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.803258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.803277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.803285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.815524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.815543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.815551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.824706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.824725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.824733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.834097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.834116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.834124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.844373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.844392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.844399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.852813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.852832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.852840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.862510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.862529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.862537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.873064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.873082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.873090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.881703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.881722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.881729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.894157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.894181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.894190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.904201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.904219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.904227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.912822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.912842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.912849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.925317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.925337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.925348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.937511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.937531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.937538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.945859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.945878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.945886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.957218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.957237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.340 [2024-05-15 17:16:40.957245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.969439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.340 [2024-05-15 17:16:40.969458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:53.340 [2024-05-15 17:16:40.969466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.340 [2024-05-15 17:16:40.977997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.341 [2024-05-15 17:16:40.978017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.341 [2024-05-15 17:16:40.978025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.341 [2024-05-15 17:16:40.989266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.341 [2024-05-15 17:16:40.989287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.341 [2024-05-15 17:16:40.989294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.597 [2024-05-15 17:16:41.002089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.597 [2024-05-15 17:16:41.002115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.597 [2024-05-15 17:16:41.002124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.597 [2024-05-15 17:16:41.014334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.597 [2024-05-15 17:16:41.014356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.597 [2024-05-15 17:16:41.014365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.597 [2024-05-15 17:16:41.022184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.597 [2024-05-15 17:16:41.022210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.597 [2024-05-15 17:16:41.022218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.597 [2024-05-15 17:16:41.033636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.597 [2024-05-15 17:16:41.033658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.597 [2024-05-15 17:16:41.033667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.597 [2024-05-15 17:16:41.043766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.597 [2024-05-15 17:16:41.043788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:4388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.597 [2024-05-15 17:16:41.043796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.597 [2024-05-15 17:16:41.052615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.597 [2024-05-15 17:16:41.052636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.052644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.062366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.062387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.062395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.070953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.070975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:19615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.070982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.081669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.081689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.081697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.091825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.091845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.091853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.101007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.101028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.101035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.112194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.112215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.112223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.120813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.120834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.120842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.131311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.131330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.131338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.140551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.140571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.140579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.149758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.149779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.149787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.159010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.159029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.159037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.169446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.169466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.169473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.177565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 
00:25:53.598 [2024-05-15 17:16:41.177585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.177592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.186951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.186971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.186982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.198858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.198877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.198885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.207763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.207781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.207789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.217205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.217225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.217233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.226597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.226617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.226625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.237350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.237372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.237380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.598 [2024-05-15 17:16:41.246790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.598 [2024-05-15 17:16:41.246811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.598 [2024-05-15 17:16:41.246820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.855 [2024-05-15 17:16:41.258925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.855 [2024-05-15 17:16:41.258948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.855 [2024-05-15 17:16:41.258958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.855 [2024-05-15 17:16:41.271326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.855 [2024-05-15 17:16:41.271348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.855 [2024-05-15 17:16:41.271357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.855 [2024-05-15 17:16:41.280360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.855 [2024-05-15 17:16:41.280389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.855 [2024-05-15 17:16:41.280397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.855 [2024-05-15 17:16:41.290840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.855 [2024-05-15 17:16:41.290861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.855 [2024-05-15 17:16:41.290870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.855 [2024-05-15 17:16:41.300665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.855 [2024-05-15 17:16:41.300686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.855 [2024-05-15 17:16:41.300693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.855 [2024-05-15 17:16:41.310084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.855 [2024-05-15 17:16:41.310104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.855 [2024-05-15 17:16:41.310112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.855 [2024-05-15 17:16:41.321786] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.855 [2024-05-15 17:16:41.321807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.855 [2024-05-15 17:16:41.321815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.855 [2024-05-15 17:16:41.333657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.855 [2024-05-15 17:16:41.333678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.855 [2024-05-15 17:16:41.333686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.855 [2024-05-15 17:16:41.341917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.855 [2024-05-15 17:16:41.341938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.855 [2024-05-15 17:16:41.341945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.855 [2024-05-15 17:16:41.351863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.855 [2024-05-15 17:16:41.351882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.855 [2024-05-15 17:16:41.351890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.855 [2024-05-15 17:16:41.361909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.855 [2024-05-15 17:16:41.361929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.855 [2024-05-15 17:16:41.361941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.855 [2024-05-15 17:16:41.371434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.855 [2024-05-15 17:16:41.371454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.855 [2024-05-15 17:16:41.371462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.855 [2024-05-15 17:16:41.380201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.855 [2024-05-15 17:16:41.380221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.856 [2024-05-15 17:16:41.380229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:53.856 [2024-05-15 17:16:41.389692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.856 [2024-05-15 17:16:41.389712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.856 [2024-05-15 17:16:41.389720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.856 [2024-05-15 17:16:41.401154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.856 [2024-05-15 17:16:41.401181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.856 [2024-05-15 17:16:41.401189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.856 [2024-05-15 17:16:41.409454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.856 [2024-05-15 17:16:41.409475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.856 [2024-05-15 17:16:41.409483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.856 [2024-05-15 17:16:41.421100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.856 [2024-05-15 17:16:41.421121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.856 [2024-05-15 17:16:41.421129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.856 [2024-05-15 17:16:41.429600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.856 [2024-05-15 17:16:41.429620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.856 [2024-05-15 17:16:41.429628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.856 [2024-05-15 17:16:41.439078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.856 [2024-05-15 17:16:41.439097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.856 [2024-05-15 17:16:41.439105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.856 [2024-05-15 17:16:41.447932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.856 [2024-05-15 17:16:41.447954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.856 [2024-05-15 17:16:41.447962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.856 [2024-05-15 17:16:41.458426] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.856 [2024-05-15 17:16:41.458445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.856 [2024-05-15 17:16:41.458453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.856 [2024-05-15 17:16:41.470110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.856 [2024-05-15 17:16:41.470129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.856 [2024-05-15 17:16:41.470137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.856 [2024-05-15 17:16:41.480847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.856 [2024-05-15 17:16:41.480866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.856 [2024-05-15 17:16:41.480874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.856 [2024-05-15 17:16:41.489559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.856 [2024-05-15 17:16:41.489578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.856 [2024-05-15 17:16:41.489586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.856 [2024-05-15 17:16:41.501771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.856 [2024-05-15 17:16:41.501790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.856 [2024-05-15 17:16:41.501798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.856 [2024-05-15 17:16:41.512762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:53.856 [2024-05-15 17:16:41.512784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.856 [2024-05-15 17:16:41.512793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.113 [2024-05-15 17:16:41.524298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.113 [2024-05-15 17:16:41.524320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.113 [2024-05-15 17:16:41.524329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.113 [2024-05-15 17:16:41.532698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.113 [2024-05-15 17:16:41.532718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.113 [2024-05-15 17:16:41.532726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.113 [2024-05-15 17:16:41.545047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.113 [2024-05-15 17:16:41.545067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.113 [2024-05-15 17:16:41.545075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.113 [2024-05-15 17:16:41.555656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.113 [2024-05-15 17:16:41.555675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.555683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.564583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.564602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.564610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.573921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.573941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.573949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.586476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.586496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.586505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.595058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.595077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:54.114 [2024-05-15 17:16:41.595086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.606304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.606323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.606331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.618548] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.618567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.618575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.630662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.630682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.630693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.643068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.643088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.643095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.651924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.651943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.651951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.663479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.663499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.663508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.674914] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.674932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:19331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.674940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.684223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.684242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.684250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.692581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.692600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.692608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.702573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.702592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.702600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.710813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.710833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.710841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.721481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.721507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.721515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.731984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.732005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.732013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.741998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.742017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.742025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.750598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.750619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.750626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.761745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.761765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.761772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.114 [2024-05-15 17:16:41.770357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.114 [2024-05-15 17:16:41.770380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.114 [2024-05-15 17:16:41.770388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.783055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.783077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.783086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.794327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.794347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.794355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.802556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.802576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.802584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.813494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 
00:25:54.372 [2024-05-15 17:16:41.813514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.813522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.822153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.822175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.822183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.833889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.833908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.833916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.844329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.844349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.844357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.856597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.856616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.856624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.867123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.867142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.867150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.875762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.875781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.875788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.885492] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.885512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.885520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.894960] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.894983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.894991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.905040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.905059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.905067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.913459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.913477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.913485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.925084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.925104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.925112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.935323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.935342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.935350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.944888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.944907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.944915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:54.372 [2024-05-15 17:16:41.955481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.955500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:15054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.955508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.964020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.964039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.964047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.974814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.974833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.974841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.987062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.987082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.987089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:41.995288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:41.995307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:41.995314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:42.006894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:42.006913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.372 [2024-05-15 17:16:42.006921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.372 [2024-05-15 17:16:42.016243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.372 [2024-05-15 17:16:42.016262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.373 [2024-05-15 17:16:42.016270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.373 [2024-05-15 17:16:42.024305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.373 [2024-05-15 17:16:42.024324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.373 [2024-05-15 17:16:42.024332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-05-15 17:16:42.035443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.630 [2024-05-15 17:16:42.035465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-05-15 17:16:42.035473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-05-15 17:16:42.045260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.630 [2024-05-15 17:16:42.045281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-05-15 17:16:42.045289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-05-15 17:16:42.053480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.630 [2024-05-15 17:16:42.053500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-05-15 17:16:42.053508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-05-15 17:16:42.063204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.630 [2024-05-15 17:16:42.063224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-05-15 17:16:42.063235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-05-15 17:16:42.074827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.630 [2024-05-15 17:16:42.074846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-05-15 17:16:42.074854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-05-15 17:16:42.083995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.630 [2024-05-15 17:16:42.084015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-05-15 17:16:42.084022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-05-15 17:16:42.095027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.630 [2024-05-15 17:16:42.095047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-05-15 17:16:42.095054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-05-15 17:16:42.104356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.630 [2024-05-15 17:16:42.104375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-05-15 17:16:42.104383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-05-15 17:16:42.115868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.630 [2024-05-15 17:16:42.115888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-05-15 17:16:42.115895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-05-15 17:16:42.128126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.630 [2024-05-15 17:16:42.128146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-05-15 17:16:42.128153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-05-15 17:16:42.136673] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.630 [2024-05-15 17:16:42.136692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-05-15 17:16:42.136700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-05-15 17:16:42.148266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.630 [2024-05-15 17:16:42.148285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-05-15 17:16:42.148293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-05-15 17:16:42.159586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.630 [2024-05-15 17:16:42.159611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:54.630 [2024-05-15 17:16:42.159619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.631 [2024-05-15 17:16:42.170715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.631 [2024-05-15 17:16:42.170735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.631 [2024-05-15 17:16:42.170743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.631 [2024-05-15 17:16:42.179084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.631 [2024-05-15 17:16:42.179103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.631 [2024-05-15 17:16:42.179110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.631 [2024-05-15 17:16:42.191597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.631 [2024-05-15 17:16:42.191616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.631 [2024-05-15 17:16:42.191623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.631 [2024-05-15 17:16:42.204117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.631 [2024-05-15 17:16:42.204135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.631 [2024-05-15 17:16:42.204143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.631 [2024-05-15 17:16:42.215497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.631 [2024-05-15 17:16:42.215516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.631 [2024-05-15 17:16:42.215524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.631 [2024-05-15 17:16:42.223980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.631 [2024-05-15 17:16:42.224001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.631 [2024-05-15 17:16:42.224010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.631 [2024-05-15 17:16:42.235716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.631 [2024-05-15 17:16:42.235736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:10376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.631 [2024-05-15 17:16:42.235743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.631 [2024-05-15 17:16:42.246188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.631 [2024-05-15 17:16:42.246208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.631 [2024-05-15 17:16:42.246216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.631 [2024-05-15 17:16:42.254850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.631 [2024-05-15 17:16:42.254870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.631 [2024-05-15 17:16:42.254878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.631 [2024-05-15 17:16:42.264203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.631 [2024-05-15 17:16:42.264223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.631 [2024-05-15 17:16:42.264230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.631 [2024-05-15 17:16:42.274492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.631 [2024-05-15 17:16:42.274518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.631 [2024-05-15 17:16:42.274526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.631 [2024-05-15 17:16:42.282664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.631 [2024-05-15 17:16:42.282683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.631 [2024-05-15 17:16:42.282691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.888 [2024-05-15 17:16:42.295019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.888 [2024-05-15 17:16:42.295042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.888 [2024-05-15 17:16:42.295051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.888 [2024-05-15 17:16:42.306491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.888 [2024-05-15 17:16:42.306511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.888 [2024-05-15 17:16:42.306519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.888 [2024-05-15 17:16:42.315632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.888 [2024-05-15 17:16:42.315653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.888 [2024-05-15 17:16:42.315661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.888 [2024-05-15 17:16:42.326975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.888 [2024-05-15 17:16:42.326995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.888 [2024-05-15 17:16:42.327003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.888 [2024-05-15 17:16:42.336252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.888 [2024-05-15 17:16:42.336272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.888 [2024-05-15 17:16:42.336284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.888 [2024-05-15 17:16:42.346000] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.888 [2024-05-15 17:16:42.346020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.888 [2024-05-15 17:16:42.346028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.888 [2024-05-15 17:16:42.356094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.888 [2024-05-15 17:16:42.356114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.888 [2024-05-15 17:16:42.356122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.888 [2024-05-15 17:16:42.364845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.889 [2024-05-15 17:16:42.364877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.364886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.377972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 
00:25:54.889 [2024-05-15 17:16:42.377995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.378004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.388837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.889 [2024-05-15 17:16:42.388857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.388865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.397285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.889 [2024-05-15 17:16:42.397304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.397312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.409304] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.889 [2024-05-15 17:16:42.409323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.409331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.420527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.889 [2024-05-15 17:16:42.420549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.420557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.428931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.889 [2024-05-15 17:16:42.428951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.428959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.441820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.889 [2024-05-15 17:16:42.441841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.441848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.450810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.889 [2024-05-15 17:16:42.450832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.450840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.461021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.889 [2024-05-15 17:16:42.461043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.461050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.469658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.889 [2024-05-15 17:16:42.469679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.469687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.482472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.889 [2024-05-15 17:16:42.482492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.482500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.495483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.889 [2024-05-15 17:16:42.495503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.495511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.503522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.889 [2024-05-15 17:16:42.503542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.503549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.514815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970) 00:25:54.889 [2024-05-15 17:16:42.514836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.889 [2024-05-15 17:16:42.514848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.889 [2024-05-15 17:16:42.527111] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970)
00:25:54.889 [2024-05-15 17:16:42.527132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.889 [2024-05-15 17:16:42.527140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:54.889 [2024-05-15 17:16:42.537842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970)
00:25:54.889 [2024-05-15 17:16:42.537862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.889 [2024-05-15 17:16:42.537870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:54.889 [2024-05-15 17:16:42.546022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x23e3970)
00:25:54.889 [2024-05-15 17:16:42.546045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:54.889 [2024-05-15 17:16:42.546054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:55.146
00:25:55.146 Latency(us)
00:25:55.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:55.146 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:55.146 nvme0n1 : 2.04 24340.41 95.08 0.00 0.00 5147.94 2179.78 45362.31
00:25:55.146 ===================================================================================================================
00:25:55.146 Total : 24340.41 95.08 0.00 0.00 5147.94 2179.78 45362.31
00:25:55.146 0
00:25:55.146 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:55.146 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:55.146 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:55.146 | .driver_specific
00:25:55.146 | .nvme_error
00:25:55.146 | .status_code
00:25:55.146 | .command_transient_transport_error'
00:25:55.146 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:55.146 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 195 > 0 ))
00:25:55.146 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3209561
00:25:55.146 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3209561 ']'
00:25:55.146 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3209561
00:25:55.146 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:25:55.146 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:25:55.146 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3209561
00:25:55.401 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
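For reference, the get_transient_errcount step traced above is just a readback of the NVMe error counters over the bdevperf RPC socket. A minimal stand-alone sketch of that query, assuming (as in this run) that bdevperf is still listening on /var/tmp/bperf.sock, the attached bdev is named nvme0n1, and the SPDK checkout sits at this job's workspace path; SPDK_DIR is introduced here only as shorthand:

  # Shorthand for this job's SPDK checkout, purely for readability of the sketch.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # bdev_get_iostat carries driver_specific.nvme_error counters here because the controller
  # was created after bdev_nvme_set_options --nvme-error-stat earlier in the test.
  errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The harness treats the pass as successful when at least one injected digest error was
  # reported back as a transient transport error (195 of them in the run above).
  (( errcount > 0 )) && echo "transient transport errors observed: $errcount"

The (( 195 > 0 )) check in the trace is exactly this comparison with the count already substituted in.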
00:25:55.401 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:25:55.401 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3209561'
00:25:55.401 killing process with pid 3209561
00:25:55.401 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3209561
00:25:55.401 Received shutdown signal, test time was about 2.000000 seconds
00:25:55.401
00:25:55.401 Latency(us)
00:25:55.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:55.401 ===================================================================================================================
00:25:55.401 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:55.401 17:16:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3209561
00:25:55.401 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:25:55.401 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:55.401 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:25:55.401 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:55.401 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:25:55.401 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3210252
00:25:55.401 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3210252 /var/tmp/bperf.sock
00:25:55.401 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:25:55.401 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3210252 ']'
00:25:55.401 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:55.401 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:25:55.401 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:55.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:55.401 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:25:55.401 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:55.658 [2024-05-15 17:16:43.091253] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization...
00:25:55.658 [2024-05-15 17:16:43.091301] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210252 ]
00:25:55.658 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:55.658 Zero copy mechanism will not be used.
00:25:55.658 EAL: No free 2048 kB hugepages reported on node 1
00:25:55.658 [2024-05-15 17:16:43.143884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:55.658 [2024-05-15 17:16:43.215464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:56.593 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:25:56.593 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:25:56.593 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:56.593 17:16:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:56.593 17:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:56.593 17:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:56.593 17:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:56.593 17:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:56.593 17:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:56.593 17:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:56.851 nvme0n1
00:25:56.851 17:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:56.851 17:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:56.851 17:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:56.851 17:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:56.851 17:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:56.851 17:16:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:57.112 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:57.112 Zero copy mechanism will not be used.
00:25:57.112 Running I/O for 2 seconds...
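Condensed, the setup traced above for this 131072-byte, queue-depth-16 pass amounts to the following sequence. This is a sketch assembled from the commands already captured in the trace (same RPC socket, target address 10.0.0.2:4420 and subsystem nqn.2016-06.io.spdk:cnode1 as this run); SPDK_DIR and RPC are introduced here only as shorthand:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"
  # Enable per-status-code NVMe error counters and bdev-level retries for the new bdevperf instance.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any stale crc32c injection, then attach the target with data digest (--ddgst) enabled.
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm crc32c error injection (-t corrupt -i 32), exactly as the harness does above.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # Start the queued randread workload in the already-running bdevperf (-w randread -o 131072 -q 16 -t 2 above).
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions that follow are the intended effect of that injection (this is the digest-error test), not an unexpected target-side failure.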
00:25:57.112 [2024-05-15 17:16:44.524510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.112 [2024-05-15 17:16:44.524543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-05-15 17:16:44.524553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.112 [2024-05-15 17:16:44.533658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.112 [2024-05-15 17:16:44.533684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-05-15 17:16:44.533693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.112 [2024-05-15 17:16:44.542160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.112 [2024-05-15 17:16:44.542186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-05-15 17:16:44.542195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.112 [2024-05-15 17:16:44.549790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.112 [2024-05-15 17:16:44.549810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-05-15 17:16:44.549818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.112 [2024-05-15 17:16:44.557123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.112 [2024-05-15 17:16:44.557143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-05-15 17:16:44.557151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.112 [2024-05-15 17:16:44.563702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.112 [2024-05-15 17:16:44.563725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.112 [2024-05-15 17:16:44.563733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.112 [2024-05-15 17:16:44.570505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.112 [2024-05-15 17:16:44.570527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.570536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.577295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.577315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.577324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.584666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.584687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.584695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.591677] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.591698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.591706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.598816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.598837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.598845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.605898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.605918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.605927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.613039] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.613060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.613068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.621469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.621490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.621498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.628674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.628694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.628707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.635869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.635891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.635900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.643135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.643156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.643169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.650116] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.650137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.650144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.656861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.656881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.656889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.664062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.664084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.664092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.672694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.672715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.113 [2024-05-15 17:16:44.672723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.681222] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.681243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.681251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.690056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.690077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.690085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.699791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.699817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.699825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.709155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.709183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.709191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.718640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.718663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.718671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.727044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.727066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.727074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.113 [2024-05-15 17:16:44.735831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.113 [2024-05-15 17:16:44.735853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1344 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.113 [2024-05-15 17:16:44.735861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.114 [2024-05-15 17:16:44.744639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.114 [2024-05-15 17:16:44.744661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.114 [2024-05-15 17:16:44.744670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.114 [2024-05-15 17:16:44.754189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.114 [2024-05-15 17:16:44.754213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.114 [2024-05-15 17:16:44.754222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.114 [2024-05-15 17:16:44.763137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.114 [2024-05-15 17:16:44.763160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.114 [2024-05-15 17:16:44.763173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.373 [2024-05-15 17:16:44.773110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.373 [2024-05-15 17:16:44.773136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.373 [2024-05-15 17:16:44.773144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.373 [2024-05-15 17:16:44.782386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.373 [2024-05-15 17:16:44.782411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.373 [2024-05-15 17:16:44.782419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.373 [2024-05-15 17:16:44.790918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.373 [2024-05-15 17:16:44.790940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.373 [2024-05-15 17:16:44.790949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.373 [2024-05-15 17:16:44.799045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.373 [2024-05-15 17:16:44.799067] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.373 [2024-05-15 17:16:44.799075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.373 [2024-05-15 17:16:44.808213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.373 [2024-05-15 17:16:44.808234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.373 [2024-05-15 17:16:44.808243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.373 [2024-05-15 17:16:44.815864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.373 [2024-05-15 17:16:44.815885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.373 [2024-05-15 17:16:44.815893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.373 [2024-05-15 17:16:44.823532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.373 [2024-05-15 17:16:44.823553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.373 [2024-05-15 17:16:44.823561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.373 [2024-05-15 17:16:44.830966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.373 [2024-05-15 17:16:44.830986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.373 [2024-05-15 17:16:44.830994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.373 [2024-05-15 17:16:44.837444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.837465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.837473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.843599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.843619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.843630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.851298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.851319] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.851327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.860217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.860237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.860245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.868533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.868554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.868562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.875954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.875975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.875982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.883026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.883046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.883054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.889738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.889757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.889764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.896644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.896664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.896672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.905365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.905386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.905393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.913540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.913563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.913571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.921728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.921749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.921756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.930054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.930074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.930082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.937488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.937508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.937516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.944696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.944717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.944725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.952456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.952476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.952484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.960684] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.960704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.960712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.968706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.968726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.968734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.975730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.975750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.975758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.982866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.982885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.982892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.989800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.989821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.989829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:44.996553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:44.996573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:44.996580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:45.003159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:45.003185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:45.003193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:25:57.374 [2024-05-15 17:16:45.009412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:45.009433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:45.009441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:45.015211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.374 [2024-05-15 17:16:45.015231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.374 [2024-05-15 17:16:45.015238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.374 [2024-05-15 17:16:45.022807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.375 [2024-05-15 17:16:45.022827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.375 [2024-05-15 17:16:45.022835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.031830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.031853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.031862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.040251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.040273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.040285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.047797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.047818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.047826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.055569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.055590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.055598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.063785] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.063807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.063815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.072455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.072476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.072484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.080208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.080228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.080236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.087284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.087304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.087312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.094309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.094329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.094337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.100869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.100889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.100897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.107636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.107657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.107665] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.114794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.114814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.114823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.121600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.121621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.121628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.127787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.127807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.127814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.133761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.133781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.133789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.139516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.139537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.139545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.147537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.147557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.147565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.156550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.156570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.156578] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.164705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.164725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.164736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.172861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.172881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.172888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.180395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.180415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.180423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.187432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.187451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.187459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.194300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.194320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.633 [2024-05-15 17:16:45.194328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.633 [2024-05-15 17:16:45.200554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.633 [2024-05-15 17:16:45.200574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.634 [2024-05-15 17:16:45.200581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.634 [2024-05-15 17:16:45.209702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.634 [2024-05-15 17:16:45.209722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:57.634 [2024-05-15 17:16:45.209729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.634 [2024-05-15 17:16:45.217868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.634 [2024-05-15 17:16:45.217889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.634 [2024-05-15 17:16:45.217896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.634 [2024-05-15 17:16:45.225374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.634 [2024-05-15 17:16:45.225394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.634 [2024-05-15 17:16:45.225402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.634 [2024-05-15 17:16:45.232968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.634 [2024-05-15 17:16:45.232992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.634 [2024-05-15 17:16:45.232999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.634 [2024-05-15 17:16:45.239767] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.634 [2024-05-15 17:16:45.239787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.634 [2024-05-15 17:16:45.239794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.634 [2024-05-15 17:16:45.246365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.634 [2024-05-15 17:16:45.246385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.634 [2024-05-15 17:16:45.246393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.634 [2024-05-15 17:16:45.253045] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.634 [2024-05-15 17:16:45.253065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.634 [2024-05-15 17:16:45.253073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.634 [2024-05-15 17:16:45.259380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.634 [2024-05-15 17:16:45.259400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.634 [2024-05-15 17:16:45.259407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.634 [2024-05-15 17:16:45.265969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.634 [2024-05-15 17:16:45.265991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.634 [2024-05-15 17:16:45.265999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.634 [2024-05-15 17:16:45.271980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.634 [2024-05-15 17:16:45.272002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.634 [2024-05-15 17:16:45.272009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.634 [2024-05-15 17:16:45.277912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.634 [2024-05-15 17:16:45.277932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.634 [2024-05-15 17:16:45.277940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.634 [2024-05-15 17:16:45.283873] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.634 [2024-05-15 17:16:45.283893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.634 [2024-05-15 17:16:45.283901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.634 [2024-05-15 17:16:45.289817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.634 [2024-05-15 17:16:45.289841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.634 [2024-05-15 17:16:45.289851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.892 [2024-05-15 17:16:45.295797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.892 [2024-05-15 17:16:45.295823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.892 [2024-05-15 17:16:45.295831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.892 [2024-05-15 17:16:45.301153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.892 [2024-05-15 17:16:45.301181] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.892 [2024-05-15 17:16:45.301189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.892 [2024-05-15 17:16:45.306798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.892 [2024-05-15 17:16:45.306820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.306828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.312475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.312497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.312504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.318128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.318152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.318160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.323720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.323741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.323749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.329389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.329411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.329419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.335106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.335127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.335138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.340893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.340915] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.340922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.346569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.346590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.346598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.352071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.352092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.352100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.357726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.357747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.357754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.363372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.363393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.363401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.369140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.369160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.369176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.374977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.374998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.375005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.380662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 
00:25:57.893 [2024-05-15 17:16:45.380681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.380689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.386378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.386402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.386409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.392175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.392195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.392202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.397965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.397986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.397993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.403621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.403642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.403649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.409601] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.409622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.409629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.415590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.415612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.415619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.421284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.421304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.421312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.426928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.426948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.426956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.432735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.432755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.432763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.438491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.438512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.438519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.444223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.444244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.444252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.449991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.450013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.450021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.455787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.455807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.455815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.461436] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.893 [2024-05-15 17:16:45.461457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.893 [2024-05-15 17:16:45.461465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.893 [2024-05-15 17:16:45.466998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.894 [2024-05-15 17:16:45.467019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.894 [2024-05-15 17:16:45.467027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.894 [2024-05-15 17:16:45.472741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.894 [2024-05-15 17:16:45.472762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.894 [2024-05-15 17:16:45.472770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.894 [2024-05-15 17:16:45.478396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.894 [2024-05-15 17:16:45.478416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.894 [2024-05-15 17:16:45.478424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.894 [2024-05-15 17:16:45.484022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.894 [2024-05-15 17:16:45.484042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.894 [2024-05-15 17:16:45.484055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.894 [2024-05-15 17:16:45.489713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.894 [2024-05-15 17:16:45.489733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.894 [2024-05-15 17:16:45.489741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.894 [2024-05-15 17:16:45.495493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.894 [2024-05-15 17:16:45.495513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.894 [2024-05-15 17:16:45.495520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:25:57.894 [2024-05-15 17:16:45.501078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.894 [2024-05-15 17:16:45.501098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.894 [2024-05-15 17:16:45.501106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.894 [2024-05-15 17:16:45.507885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.894 [2024-05-15 17:16:45.507909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.894 [2024-05-15 17:16:45.507917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.894 [2024-05-15 17:16:45.515495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.894 [2024-05-15 17:16:45.515516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.894 [2024-05-15 17:16:45.515524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.894 [2024-05-15 17:16:45.522255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.894 [2024-05-15 17:16:45.522277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.894 [2024-05-15 17:16:45.522285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.894 [2024-05-15 17:16:45.530036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.894 [2024-05-15 17:16:45.530057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.894 [2024-05-15 17:16:45.530065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.894 [2024-05-15 17:16:45.538048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.894 [2024-05-15 17:16:45.538069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.894 [2024-05-15 17:16:45.538077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.894 [2024-05-15 17:16:45.546317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:57.894 [2024-05-15 17:16:45.546339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.894 [2024-05-15 17:16:45.546348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.152 [2024-05-15 17:16:45.554456] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.152 [2024-05-15 17:16:45.554480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.152 [2024-05-15 17:16:45.554489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.152 [2024-05-15 17:16:45.562457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.152 [2024-05-15 17:16:45.562480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.152 [2024-05-15 17:16:45.562488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.152 [2024-05-15 17:16:45.571098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.152 [2024-05-15 17:16:45.571121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.152 [2024-05-15 17:16:45.571128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.152 [2024-05-15 17:16:45.578805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.152 [2024-05-15 17:16:45.578827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.152 [2024-05-15 17:16:45.578836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.152 [2024-05-15 17:16:45.587247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.152 [2024-05-15 17:16:45.587269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.152 [2024-05-15 17:16:45.587277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.152 [2024-05-15 17:16:45.596224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.152 [2024-05-15 17:16:45.596245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.596253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.605041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.605063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.605070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.613342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.613364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.613376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.622063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.622084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.622092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.631534] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.631556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.631565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.641254] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.641275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.641283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.650958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.650979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.650988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.660157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.660184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.660193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.669566] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.669587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.669595] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.678343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.678364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.678373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.687545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.687567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.687577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.698299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.698325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.698333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.707689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.707711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.707719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.718931] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.718953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.718961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.728599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.728621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.728629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.738599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.738621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:58.153 [2024-05-15 17:16:45.738628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.748375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.748397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.748405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.758486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.758507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.758515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.767527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.767549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.767557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.777912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.777933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.777942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.787482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.787505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.787513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.797646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.797668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.797676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.153 [2024-05-15 17:16:45.806831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.153 [2024-05-15 17:16:45.806853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3744 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.153 [2024-05-15 17:16:45.806863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.815485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.815509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.815518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.824864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.824887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.824895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.834642] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.834664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.834673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.843628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.843650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.843658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.852199] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.852219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.852227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.860344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.860365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.860377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.868872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.868893] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.868901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.878526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.878547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.878555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.887203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.887224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.887232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.896106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.896128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.896136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.904616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.904637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.904645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.913557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.913579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.913587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.923389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.923410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.923419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.933289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.933309] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.933317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.941818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.941844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.941852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.951755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.951776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.951783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.961386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.961407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.961416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.970523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.970544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.970552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.980193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.980214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.980222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.989181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.989202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.989210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:45.997961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:45.997983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:45.997991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:46.007360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:46.007381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:46.007389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:46.015953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:46.015974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:46.015982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:46.024129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:46.024150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.412 [2024-05-15 17:16:46.024158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.412 [2024-05-15 17:16:46.032384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.412 [2024-05-15 17:16:46.032406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.413 [2024-05-15 17:16:46.032413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.413 [2024-05-15 17:16:46.040910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.413 [2024-05-15 17:16:46.040931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.413 [2024-05-15 17:16:46.040939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.413 [2024-05-15 17:16:46.048662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.413 [2024-05-15 17:16:46.048683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.413 [2024-05-15 17:16:46.048690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.413 [2024-05-15 17:16:46.055911] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.413 [2024-05-15 17:16:46.055931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.413 [2024-05-15 17:16:46.055939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.413 [2024-05-15 17:16:46.062773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.413 [2024-05-15 17:16:46.062794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.413 [2024-05-15 17:16:46.062803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.413 [2024-05-15 17:16:46.069919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.413 [2024-05-15 17:16:46.069943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.413 [2024-05-15 17:16:46.069952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.671 [2024-05-15 17:16:46.076541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.671 [2024-05-15 17:16:46.076564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.671 [2024-05-15 17:16:46.076572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.671 [2024-05-15 17:16:46.084209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.671 [2024-05-15 17:16:46.084231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.671 [2024-05-15 17:16:46.084243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.671 [2024-05-15 17:16:46.091764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.091786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.091794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.100880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.100901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.100910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:25:58.672 [2024-05-15 17:16:46.109387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.109408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.109416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.119006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.119028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.119036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.128279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.128300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.128308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.137822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.137844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.137852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.147910] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.147930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.147938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.156790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.156811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.156819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.166735] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.166757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.166766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.177255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.177277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.177285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.186696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.186718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.186727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.197081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.197103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.197110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.207061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.207083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.207091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.217034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.217055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.217063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.226526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.226547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.226555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.236077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.236099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.236108] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.245111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.245134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.245146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.251815] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.251836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.251844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.260052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.260074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.260082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.269076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.269097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.269106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.277308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.277328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.277337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.285769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.285790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.285798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.293773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.293793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.293801] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.301162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.301188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.301195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.308117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.308137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.308145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.314720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.314744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.314752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.321158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.321183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.321191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.672 [2024-05-15 17:16:46.327519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.672 [2024-05-15 17:16:46.327541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.672 [2024-05-15 17:16:46.327549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.333938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.333961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.333970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.340089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.340110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:58.931 [2024-05-15 17:16:46.340118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.346215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.346235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.346243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.352092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.352113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.352121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.358006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.358027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.358034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.363867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.363888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.363895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.369827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.369847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.369854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.375683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.375705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.375713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.381702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.381724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.381732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.389289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.389310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.389318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.396544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.396565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.396573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.403485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.403506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.403514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.410703] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.410723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.410731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.418093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.418115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.418122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.425244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.425265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.425277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.432791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.432811] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.432819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.440578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.440599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.440607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.448005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.448026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.448034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.455687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.455708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.455716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.463177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.931 [2024-05-15 17:16:46.463198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.931 [2024-05-15 17:16:46.463206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.931 [2024-05-15 17:16:46.470713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.932 [2024-05-15 17:16:46.470734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.932 [2024-05-15 17:16:46.470742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.932 [2024-05-15 17:16:46.477896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.932 [2024-05-15 17:16:46.477917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.932 [2024-05-15 17:16:46.477925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.932 [2024-05-15 17:16:46.485469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.932 [2024-05-15 17:16:46.485490] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.932 [2024-05-15 17:16:46.485498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.932 [2024-05-15 17:16:46.492713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.932 [2024-05-15 17:16:46.492738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.932 [2024-05-15 17:16:46.492746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.932 [2024-05-15 17:16:46.501109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.932 [2024-05-15 17:16:46.501130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.932 [2024-05-15 17:16:46.501138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.932 [2024-05-15 17:16:46.509899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.932 [2024-05-15 17:16:46.509919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.932 [2024-05-15 17:16:46.509926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.932 [2024-05-15 17:16:46.518341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x106d0f0) 00:25:58.932 [2024-05-15 17:16:46.518362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.932 [2024-05-15 17:16:46.518370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.932 00:25:58.932 Latency(us) 00:25:58.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.932 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:58.932 nvme0n1 : 2.04 3954.32 494.29 0.00 0.00 3967.04 1011.53 44906.41 00:25:58.932 =================================================================================================================== 00:25:58.932 Total : 3954.32 494.29 0.00 0.00 3967.04 1011.53 44906.41 00:25:58.932 0 00:25:58.932 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:59.190 | .driver_specific 00:25:59.190 | .nvme_error 00:25:59.190 | .status_code 00:25:59.190 | .command_transient_transport_error' 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat 
-b nvme0n1 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 260 > 0 )) 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3210252 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3210252 ']' 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3210252 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3210252 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3210252' 00:25:59.190 killing process with pid 3210252 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3210252 00:25:59.190 Received shutdown signal, test time was about 2.000000 seconds 00:25:59.190 00:25:59.190 Latency(us) 00:25:59.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.190 =================================================================================================================== 00:25:59.190 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:59.190 17:16:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3210252 00:25:59.448 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:59.448 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:59.448 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:59.448 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:59.448 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:59.448 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3210902 00:25:59.448 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3210902 /var/tmp/bperf.sock 00:25:59.448 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:59.448 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3210902 ']' 00:25:59.448 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:59.448 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:59.448 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:59.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
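The pass/fail decision for the randread ddgst case above comes down to one RPC plus a jq filter over the bdev iostat output: host/digest.sh queries bdev_get_iostat and counts completions recorded under command_transient_transport_error. A minimal sketch of that step, assuming the same bperf socket and bdev name as in this run (get_transient_errcount is the digest.sh helper traced above; the RPC call and jq path are copied from the trace):

  # count data-digest failures that the host surfaced as transient transport errors
  count=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( count > 0 ))   # this run reports 260, so the check passes and bperf pid 3210252 is killed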
00:25:59.448 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:59.448 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:59.448 [2024-05-15 17:16:47.075468] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:25:59.448 [2024-05-15 17:16:47.075516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3210902 ] 00:25:59.448 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.705 [2024-05-15 17:16:47.128462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.705 [2024-05-15 17:16:47.207990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.270 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:00.270 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:26:00.270 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:00.270 17:16:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:00.527 17:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:00.527 17:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.527 17:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:00.527 17:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.527 17:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.527 17:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.784 nvme0n1 00:26:00.784 17:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:00.784 17:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.784 17:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:00.784 17:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.784 17:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:00.784 17:16:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:00.784 Running I/O for 2 seconds... 
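Before the two-second randwrite pass starts, the trace above wires everything up over the bperf RPC socket. A condensed sketch of that setup, using only the commands visible in this log (socket path, target address, and NQN are the ones from this run; paths are relative to the SPDK checkout):

  # bdevperf is started idle (-z) and then driven over /var/tmp/bperf.sock
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  RPC='scripts/rpc.py -s /var/tmp/bperf.sock'
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep per-status error counters, retry failed I/O
  $RPC accel_error_inject_error -o crc32c -t disable                   # clear any earlier crc32c injection
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -b nvme0                          # attach with data digest enabled
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256            # corrupt crc32c results (-i 256 as traced above)
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the crc32c digest deliberately corrupted, each affected WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the completion lines that follow show.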
00:26:01.042 [2024-05-15 17:16:48.450193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.042 [2024-05-15 17:16:48.450375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.042 [2024-05-15 17:16:48.450403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.042 [2024-05-15 17:16:48.459886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.042 [2024-05-15 17:16:48.460056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.042 [2024-05-15 17:16:48.460077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.042 [2024-05-15 17:16:48.469582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.042 [2024-05-15 17:16:48.469756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.042 [2024-05-15 17:16:48.469773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.042 [2024-05-15 17:16:48.479255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.042 [2024-05-15 17:16:48.479423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.042 [2024-05-15 17:16:48.479441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.042 [2024-05-15 17:16:48.488871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.042 [2024-05-15 17:16:48.489039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.042 [2024-05-15 17:16:48.489057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.042 [2024-05-15 17:16:48.498506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.042 [2024-05-15 17:16:48.498690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.042 [2024-05-15 17:16:48.498708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.042 [2024-05-15 17:16:48.508100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.042 [2024-05-15 17:16:48.508301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.042 [2024-05-15 17:16:48.508325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 
sqhd:007d p:0 m:0 dnr:0 00:26:01.042 [2024-05-15 17:16:48.517698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.042 [2024-05-15 17:16:48.517867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.042 [2024-05-15 17:16:48.517884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.042 [2024-05-15 17:16:48.527360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.042 [2024-05-15 17:16:48.527553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.042 [2024-05-15 17:16:48.527569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.042 [2024-05-15 17:16:48.536926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.042 [2024-05-15 17:16:48.537110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.042 [2024-05-15 17:16:48.537127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.042 [2024-05-15 17:16:48.546565] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.042 [2024-05-15 17:16:48.546749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.042 [2024-05-15 17:16:48.546766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.042 [2024-05-15 17:16:48.556134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.042 [2024-05-15 17:16:48.556307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.042 [2024-05-15 17:16:48.556324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.042 [2024-05-15 17:16:48.565646] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.042 [2024-05-15 17:16:48.565815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.565831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.043 [2024-05-15 17:16:48.575223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.043 [2024-05-15 17:16:48.575408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.575436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.043 [2024-05-15 17:16:48.584825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.043 [2024-05-15 17:16:48.585009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.585027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.043 [2024-05-15 17:16:48.594392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.043 [2024-05-15 17:16:48.594559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.594576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.043 [2024-05-15 17:16:48.603917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.043 [2024-05-15 17:16:48.604083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.604100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.043 [2024-05-15 17:16:48.613482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.043 [2024-05-15 17:16:48.613652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.613669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.043 [2024-05-15 17:16:48.623011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.043 [2024-05-15 17:16:48.623197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.623214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.043 [2024-05-15 17:16:48.632608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.043 [2024-05-15 17:16:48.632792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.632810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.043 [2024-05-15 17:16:48.642138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.043 [2024-05-15 17:16:48.642316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.642334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.043 [2024-05-15 17:16:48.651673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.043 [2024-05-15 17:16:48.651839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.651856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.043 [2024-05-15 17:16:48.661208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.043 [2024-05-15 17:16:48.661393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.661411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.043 [2024-05-15 17:16:48.670738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.043 [2024-05-15 17:16:48.670903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.670925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.043 [2024-05-15 17:16:48.680338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.043 [2024-05-15 17:16:48.680505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.680522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.043 [2024-05-15 17:16:48.689894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.043 [2024-05-15 17:16:48.690077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.690095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.043 [2024-05-15 17:16:48.699599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.043 [2024-05-15 17:16:48.699833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.043 [2024-05-15 17:16:48.699856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.709461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.709646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.709667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.719240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.719413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.719432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.728966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.729155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.729177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.738627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.738794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.738812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.748189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.748383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.748401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.757714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.757887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.757905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.767387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.767577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.767602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.776925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.777090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.777108] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.786422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.786592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.786610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.795944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.796129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.796146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.805523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.805695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.805711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.815016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.815183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.815217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.824597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.824790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.824806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.834146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.834341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.834359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.843720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.843904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.301 [2024-05-15 17:16:48.843921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.301 [2024-05-15 17:16:48.853246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.301 [2024-05-15 17:16:48.853415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.302 [2024-05-15 17:16:48.853432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.302 [2024-05-15 17:16:48.862756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.302 [2024-05-15 17:16:48.862926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.302 [2024-05-15 17:16:48.862943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.302 [2024-05-15 17:16:48.872309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.302 [2024-05-15 17:16:48.872503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.302 [2024-05-15 17:16:48.872521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.302 [2024-05-15 17:16:48.881881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.302 [2024-05-15 17:16:48.882048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.302 [2024-05-15 17:16:48.882065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.302 [2024-05-15 17:16:48.891409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.302 [2024-05-15 17:16:48.891590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.302 [2024-05-15 17:16:48.891607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.302 [2024-05-15 17:16:48.900950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.302 [2024-05-15 17:16:48.901118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.302 [2024-05-15 17:16:48.901135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.302 [2024-05-15 17:16:48.910454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.302 [2024-05-15 17:16:48.910620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.302 [2024-05-15 
17:16:48.910637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.302 [2024-05-15 17:16:48.920010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.302 [2024-05-15 17:16:48.920190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.302 [2024-05-15 17:16:48.920212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.302 [2024-05-15 17:16:48.929583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.302 [2024-05-15 17:16:48.929748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.302 [2024-05-15 17:16:48.929764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.302 [2024-05-15 17:16:48.939220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.302 [2024-05-15 17:16:48.939395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.302 [2024-05-15 17:16:48.939413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.302 [2024-05-15 17:16:48.948756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.302 [2024-05-15 17:16:48.948925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.302 [2024-05-15 17:16:48.948942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.302 [2024-05-15 17:16:48.958386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.302 [2024-05-15 17:16:48.958573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.302 [2024-05-15 17:16:48.958593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:48.968094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:48.968285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:48.968306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:48.977873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:48.978059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:01.560 [2024-05-15 17:16:48.978077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:48.987596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:48.987765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:48.987782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:48.997196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:48.997364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:48.997381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.006680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.006856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.006873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.016207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.016394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.016429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.025771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.025950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.025967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.035368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.035575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.035592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.044909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.045073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17859 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.045090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.054461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.054645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.054662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.064040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.064207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.064224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.073512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.073679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.073695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.083085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.083277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.083295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.092599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.092768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.092784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.102131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.102322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.102339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.111672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.111837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15322 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.111854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.121186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.121355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.121371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.130708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.130894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.130911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.140331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.140501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.140517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.149824] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.149987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.150003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.159499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.560 [2024-05-15 17:16:49.159675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.560 [2024-05-15 17:16:49.159692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.560 [2024-05-15 17:16:49.168985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.561 [2024-05-15 17:16:49.169149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.561 [2024-05-15 17:16:49.169174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.561 [2024-05-15 17:16:49.178583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.561 [2024-05-15 17:16:49.178774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 
nsid:1 lba:3022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.561 [2024-05-15 17:16:49.178798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.561 [2024-05-15 17:16:49.188122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.561 [2024-05-15 17:16:49.188295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.561 [2024-05-15 17:16:49.188312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.561 [2024-05-15 17:16:49.197620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.561 [2024-05-15 17:16:49.197790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.561 [2024-05-15 17:16:49.197808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.561 [2024-05-15 17:16:49.207152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.561 [2024-05-15 17:16:49.207346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.561 [2024-05-15 17:16:49.207363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.561 [2024-05-15 17:16:49.216831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.561 [2024-05-15 17:16:49.217055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.561 [2024-05-15 17:16:49.217076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.818 [2024-05-15 17:16:49.226604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.818 [2024-05-15 17:16:49.226791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.818 [2024-05-15 17:16:49.226811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.818 [2024-05-15 17:16:49.236387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.818 [2024-05-15 17:16:49.236632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.818 [2024-05-15 17:16:49.236652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.818 [2024-05-15 17:16:49.246306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.818 [2024-05-15 17:16:49.246480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:4244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.818 [2024-05-15 17:16:49.246498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.255962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.256209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.256228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.265478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.265644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.265661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.275056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.275228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.275245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.284574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.284742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.284759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.294076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.294263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.294280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.303679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.303844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.303861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.313177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.313347] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.313364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.322703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.322886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.322903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.332239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.332407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.332424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.341772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.341938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.341955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.351385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.351552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.351569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.360867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.361031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.361047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.370432] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.370613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.370631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.380013] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.380179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.380197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.389507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.389675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.389692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.399050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.399280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.399299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.408573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.408737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.408754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.418256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.418480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.418504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.427833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.427995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.428013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.437340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.437505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.437522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.446941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.447123] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.447140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.456533] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.456701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.456718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.466113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.466304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.466323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.819 [2024-05-15 17:16:49.475831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:01.819 [2024-05-15 17:16:49.476003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.819 [2024-05-15 17:16:49.476023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.077 [2024-05-15 17:16:49.485522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.077 [2024-05-15 17:16:49.485705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.077 [2024-05-15 17:16:49.485726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.077 [2024-05-15 17:16:49.495343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.077 [2024-05-15 17:16:49.495515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.077 [2024-05-15 17:16:49.495533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.077 [2024-05-15 17:16:49.504997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.077 [2024-05-15 17:16:49.505181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.077 [2024-05-15 17:16:49.505202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.077 [2024-05-15 17:16:49.514703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.077 [2024-05-15 
17:16:49.514873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.077 [2024-05-15 17:16:49.514890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.524318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.524509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.524526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.533872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.534038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.534056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.543458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.543644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.543661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.553041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.553232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.553249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.562621] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.562787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.562804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.572127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.572319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.572336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.581757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 
00:26:02.078 [2024-05-15 17:16:49.581925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.581943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.591269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.591446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.591463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.600853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.601019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.601036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.610649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.610824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.610843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.620257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.620448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.620466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.629851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.630041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.630058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.639424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.639600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.639617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.648928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with 
pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.649101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.649118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.658375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.658548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.658565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.667916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.668081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.668099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.677455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.677623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.677639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.686996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.687163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.687185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.696504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.696670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.696687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.706071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.706263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.706280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.715599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.715765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.715783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.725079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.725254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.725271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.078 [2024-05-15 17:16:49.734787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.078 [2024-05-15 17:16:49.734974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.078 [2024-05-15 17:16:49.734995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.336 [2024-05-15 17:16:49.744566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.336 [2024-05-15 17:16:49.744756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.336 [2024-05-15 17:16:49.744777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.336 [2024-05-15 17:16:49.754367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.336 [2024-05-15 17:16:49.754567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.336 [2024-05-15 17:16:49.754590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.336 [2024-05-15 17:16:49.764037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.336 [2024-05-15 17:16:49.764226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.336 [2024-05-15 17:16:49.764253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.336 [2024-05-15 17:16:49.773776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.336 [2024-05-15 17:16:49.773945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.336 [2024-05-15 17:16:49.773962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.336 [2024-05-15 17:16:49.783389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.336 [2024-05-15 17:16:49.783558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.336 [2024-05-15 17:16:49.783575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.336 [2024-05-15 17:16:49.792932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.336 [2024-05-15 17:16:49.793119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.336 [2024-05-15 17:16:49.793137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.336 [2024-05-15 17:16:49.802553] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.336 [2024-05-15 17:16:49.802717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.802734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.812071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.812244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.812261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.821676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.821840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.821857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.831298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.831520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.831537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.840835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.841004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.841021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.850380] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.850624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.850642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.859944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.860130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.860147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.869579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.869771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.869797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.879172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.879339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.879356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.888673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.888841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.888857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.898211] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.898397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.898425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.907779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.907949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.907966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 
17:16:49.917289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.917477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.917494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.926863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.927047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.927065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.936424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.936608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.936626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.946043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.946240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.946257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.955604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.955769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.955785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.965114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.965292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.965309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.974718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.974901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.974918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
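Each digest failure above is reported back to the initiator as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion rather than a hard I/O failure. Once the timed run ends (about 2 seconds here), the suite reads the per-status error counters back out of bdevperf and only asserts that the count is non-zero; the bdev_get_iostat/jq trace for that check appears further down in this log. A minimal stand-alone sketch of the same query, reusing the rpc.py path and bperf socket from this job (the count_transient_errors wrapper is illustrative, not a helper from the suite, and it assumes bdev_nvme_set_options --nvme-error-stat was applied before the run):

#!/usr/bin/env bash
# Count transient transport errors recorded for a bdev served by the bperf instance.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
count_transient_errors() {
    local bdev=$1
    "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}
errcount=$(count_transient_errors nvme0n1)
# The test only requires that at least one digest error was counted (208 in this run).
(( errcount > 0 )) && echo "nvme0n1 recorded $errcount transient transport errors"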
00:26:02.337 [2024-05-15 17:16:49.984265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.984436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.984453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.337 [2024-05-15 17:16:49.993838] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.337 [2024-05-15 17:16:49.994021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.337 [2024-05-15 17:16:49.994041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.003700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.003900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.003924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.013496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.013684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.013703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.023340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.023515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.023533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.033095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.033328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.033347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.042926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.043124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.043144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 
sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.052902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.053130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.053149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.062658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.062846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.062866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.072347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.072518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.072536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.082104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.082277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.082294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.091758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.091932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.091949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.101487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.101654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.101671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.111109] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.111301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.111325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.120785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.120969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.120987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.130456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.130639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.130656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.140206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.140374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.140392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.149877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.150061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.150078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.159568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.159738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.159755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.169360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.169546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.169563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.179123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.179297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.595 [2024-05-15 17:16:50.179314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.595 [2024-05-15 17:16:50.188796] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.595 [2024-05-15 17:16:50.188981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.596 [2024-05-15 17:16:50.188998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.596 [2024-05-15 17:16:50.198501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.596 [2024-05-15 17:16:50.198691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.596 [2024-05-15 17:16:50.198708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.596 [2024-05-15 17:16:50.208209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.596 [2024-05-15 17:16:50.208393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.596 [2024-05-15 17:16:50.208411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.596 [2024-05-15 17:16:50.217889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.596 [2024-05-15 17:16:50.218056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.596 [2024-05-15 17:16:50.218072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.596 [2024-05-15 17:16:50.227576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.596 [2024-05-15 17:16:50.227743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.596 [2024-05-15 17:16:50.227761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.596 [2024-05-15 17:16:50.237271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.596 [2024-05-15 17:16:50.237494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.596 [2024-05-15 17:16:50.237512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.596 [2024-05-15 17:16:50.246907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.596 [2024-05-15 17:16:50.247090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.596 [2024-05-15 17:16:50.247107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.854 [2024-05-15 17:16:50.256853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.854 [2024-05-15 17:16:50.257083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.854 [2024-05-15 17:16:50.257108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.854 [2024-05-15 17:16:50.266837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.854 [2024-05-15 17:16:50.267008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.854 [2024-05-15 17:16:50.267026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.854 [2024-05-15 17:16:50.276625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.854 [2024-05-15 17:16:50.276793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.854 [2024-05-15 17:16:50.276811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.854 [2024-05-15 17:16:50.286417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.854 [2024-05-15 17:16:50.286584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.854 [2024-05-15 17:16:50.286602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.854 [2024-05-15 17:16:50.296067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.854 [2024-05-15 17:16:50.296261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.854 [2024-05-15 17:16:50.296279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.854 [2024-05-15 17:16:50.305769] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.854 [2024-05-15 17:16:50.305940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.854 [2024-05-15 17:16:50.305958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.854 [2024-05-15 17:16:50.315427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.854 [2024-05-15 17:16:50.315612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.854 [2024-05-15 17:16:50.315629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.854 [2024-05-15 17:16:50.325126] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.854 [2024-05-15 17:16:50.325318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.854 [2024-05-15 17:16:50.325335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.854 [2024-05-15 17:16:50.334878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.854 [2024-05-15 17:16:50.335062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.854 [2024-05-15 17:16:50.335079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.854 [2024-05-15 17:16:50.344570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.854 [2024-05-15 17:16:50.344758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.854 [2024-05-15 17:16:50.344776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.854 [2024-05-15 17:16:50.354264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.854 [2024-05-15 17:16:50.354432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.854 [2024-05-15 17:16:50.354450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.854 [2024-05-15 17:16:50.363941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.854 [2024-05-15 17:16:50.364126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.854 [2024-05-15 17:16:50.364144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.854 [2024-05-15 17:16:50.373625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.854 [2024-05-15 17:16:50.373792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.854 [2024-05-15 17:16:50.373809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:02.854 [2024-05-15 17:16:50.383331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8 00:26:02.854 [2024-05-15 17:16:50.383515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:02.854 [2024-05-15 
17:16:50.383532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:02.854 [2024-05-15 17:16:50.393048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8
00:26:02.854 [2024-05-15 17:16:50.393236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:02.854 [2024-05-15 17:16:50.393254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:02.854 [2024-05-15 17:16:50.402739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8
00:26:02.854 [2024-05-15 17:16:50.402924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:02.854 [2024-05-15 17:16:50.402941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:02.854 [2024-05-15 17:16:50.412420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8
00:26:02.854 [2024-05-15 17:16:50.412602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:02.854 [2024-05-15 17:16:50.412619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:02.854 [2024-05-15 17:16:50.422356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8
00:26:02.854 [2024-05-15 17:16:50.422530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:02.854 [2024-05-15 17:16:50.422548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:02.854 [2024-05-15 17:16:50.432098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8
00:26:02.854 [2024-05-15 17:16:50.432350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:02.854 [2024-05-15 17:16:50.432368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:02.854 [2024-05-15 17:16:50.441791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1378f20) with pdu=0x2000190fe2e8
00:26:02.854 [2024-05-15 17:16:50.441959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:02.854 [2024-05-15 17:16:50.441976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:02.854
00:26:02.854 Latency(us)
00:26:02.854 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:02.854 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:02.854 nvme0n1 : 2.00 26481.12 103.44 0.00 0.00 4824.44 4559.03 11454.55
00:26:02.854 ===================================================================================================================
00:26:02.854 Total : 26481.12 103.44 0.00 0.00 4824.44 4559.03 11454.55
00:26:02.854 0
00:26:02.854 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:02.854 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:02.854 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:02.854 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:02.854 | .driver_specific
00:26:02.854 | .nvme_error
00:26:02.854 | .status_code
00:26:02.854 | .command_transient_transport_error'
00:26:03.112 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 ))
00:26:03.112 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3210902
00:26:03.112 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3210902 ']'
00:26:03.112 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3210902
00:26:03.112 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:26:03.112 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:26:03.112 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3210902
00:26:03.112 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:26:03.112 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:26:03.112 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3210902'
00:26:03.112 killing process with pid 3210902
00:26:03.112 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3210902
00:26:03.112 Received shutdown signal, test time was about 2.000000 seconds
00:26:03.112
00:26:03.112 Latency(us)
00:26:03.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:03.112 ===================================================================================================================
00:26:03.112 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:03.112 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3210902
00:26:03.369 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:03.369 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:03.369 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:03.369 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:03.369 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:03.369 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3211437
00:26:03.369 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3211437 /var/tmp/bperf.sock
00:26:03.369 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@827 -- # '[' -z 3211437 ']' 00:26:03.369 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:03.369 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:03.369 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:03.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:03.369 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:03.369 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:03.369 17:16:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.369 [2024-05-15 17:16:50.933658] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:26:03.369 [2024-05-15 17:16:50.933708] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211437 ] 00:26:03.369 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:03.369 Zero copy mechanism will not be used. 00:26:03.369 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.369 [2024-05-15 17:16:50.987227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.626 [2024-05-15 17:16:51.058941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.190 17:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:04.190 17:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:26:04.190 17:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:04.190 17:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:04.447 17:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:04.447 17:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.447 17:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:04.447 17:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.447 17:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:04.447 17:16:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:04.704 nvme0n1 00:26:04.704 17:16:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd 
accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:04.704 17:16:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.704 17:16:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:04.704 17:16:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.704 17:16:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:04.704 17:16:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:04.704 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:04.704 Zero copy mechanism will not be used. 00:26:04.704 Running I/O for 2 seconds... 00:26:04.704 [2024-05-15 17:16:52.258254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.704 [2024-05-15 17:16:52.258693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.704 [2024-05-15 17:16:52.258720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.267443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.267842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.267864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.274874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.275282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.275304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.282579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.282964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.282984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.289123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.289519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.289540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.295645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.296031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.296049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.302002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.302409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.302429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.307346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.307739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.307763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.312311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.312698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.312717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.317249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.317619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.317638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.322185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.322557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.322576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.327128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.327524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.327543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.332055] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.332442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.332461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.337069] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.337457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.337477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.342028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.342418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.342438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.346955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.347346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.347366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.351885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.352271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.352290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.356788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.357171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.357190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.705 [2024-05-15 17:16:52.362138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.705 [2024-05-15 17:16:52.362546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.705 [2024-05-15 17:16:52.362569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
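The 32-block (128 KiB) writes above come from the second error-injection pass: bdevperf drives random writes at queue depth 16 while accel_error_inject_error corrupts the CRC32C result for 32 operations, so the affected PDUs fail the data-digest check (the data_crc32_calc_done errors above) and complete with the same transient transport status. Assembled from the command trace earlier in this log, the pass reduces to roughly the following sketch; paths, addresses, and the subsystem NQN are the ones used by this job, and treating rpc_cmd as the target application's default RPC socket (versus bperf_rpc on /var/tmp/bperf.sock) is an assumption:

#!/usr/bin/env bash
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_rpc() { "$spdk/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
tgt_rpc()   { "$spdk/scripts/rpc.py" "$@"; }   # assumption: default socket reaches the nvmf target

# 128 KiB random writes, qd 16, 2 s, started in wait-for-RPC mode (-z).
"$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done   # stand-in for the suite's waitforlisten

bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
tgt_rpc accel_error_inject_error -o crc32c -t disable            # clear any earlier injection
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32      # corrupt the next 32 crc32c ops
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests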
00:26:04.963 [2024-05-15 17:16:52.368783] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.963 [2024-05-15 17:16:52.369209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.963 [2024-05-15 17:16:52.369231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.963 [2024-05-15 17:16:52.376056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.963 [2024-05-15 17:16:52.376456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.963 [2024-05-15 17:16:52.376477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.963 [2024-05-15 17:16:52.382547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.963 [2024-05-15 17:16:52.382937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.963 [2024-05-15 17:16:52.382957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.963 [2024-05-15 17:16:52.388339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.963 [2024-05-15 17:16:52.388710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.963 [2024-05-15 17:16:52.388730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.963 [2024-05-15 17:16:52.393481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.963 [2024-05-15 17:16:52.393874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.963 [2024-05-15 17:16:52.393893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.963 [2024-05-15 17:16:52.398697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.963 [2024-05-15 17:16:52.399063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.963 [2024-05-15 17:16:52.399083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.963 [2024-05-15 17:16:52.403887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.963 [2024-05-15 17:16:52.404281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.963 [2024-05-15 17:16:52.404301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.963 [2024-05-15 17:16:52.409943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.963 [2024-05-15 17:16:52.410339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.410358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.415201] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.415581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.415600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.420919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.421310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.421330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.426386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.426789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.426808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.432278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.432670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.432689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.437387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.437759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.437778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.442406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.442794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.442813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.447384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.447760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.447783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.452410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.452810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.452829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.457414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.457783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.457803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.462510] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.462897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.462917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.467480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.467854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.467873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.472428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.472817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.472836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.477349] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.477744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.477762] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.482519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.482894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.482913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.487497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.487876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.487895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.492472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.492856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.492876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.497394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.497776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.497795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.502389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.502772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.502791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.507334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.507722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.507741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.512276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.512659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 
[2024-05-15 17:16:52.512678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.517243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.517606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.517625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.522260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.522645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.522663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.527222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.527600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.527619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.532200] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.532596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.532619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.537218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.537593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.537613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.542132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.542528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.542548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.547076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.547462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.964 [2024-05-15 17:16:52.547481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.964 [2024-05-15 17:16:52.552029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.964 [2024-05-15 17:16:52.552426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.965 [2024-05-15 17:16:52.552445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.965 [2024-05-15 17:16:52.557222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.965 [2024-05-15 17:16:52.557610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.965 [2024-05-15 17:16:52.557629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.965 [2024-05-15 17:16:52.562541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.965 [2024-05-15 17:16:52.562913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.965 [2024-05-15 17:16:52.562931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.965 [2024-05-15 17:16:52.568181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.965 [2024-05-15 17:16:52.568542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.965 [2024-05-15 17:16:52.568560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.965 [2024-05-15 17:16:52.573172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.965 [2024-05-15 17:16:52.573559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.965 [2024-05-15 17:16:52.573579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.965 [2024-05-15 17:16:52.578039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.965 [2024-05-15 17:16:52.578436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.965 [2024-05-15 17:16:52.578455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.965 [2024-05-15 17:16:52.583611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.965 [2024-05-15 17:16:52.583980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.965 [2024-05-15 17:16:52.583999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.965 [2024-05-15 17:16:52.588835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.965 [2024-05-15 17:16:52.589239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.965 [2024-05-15 17:16:52.589257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.965 [2024-05-15 17:16:52.593839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.965 [2024-05-15 17:16:52.594254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.965 [2024-05-15 17:16:52.594273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.965 [2024-05-15 17:16:52.598879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.965 [2024-05-15 17:16:52.599272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.965 [2024-05-15 17:16:52.599291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.965 [2024-05-15 17:16:52.603912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.965 [2024-05-15 17:16:52.604317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.965 [2024-05-15 17:16:52.604335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.965 [2024-05-15 17:16:52.608965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.965 [2024-05-15 17:16:52.609346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.965 [2024-05-15 17:16:52.609365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.965 [2024-05-15 17:16:52.613936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.965 [2024-05-15 17:16:52.614299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.965 [2024-05-15 17:16:52.614318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.965 [2024-05-15 17:16:52.619325] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:04.965 [2024-05-15 17:16:52.619721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.965 [2024-05-15 17:16:52.619744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.223 [2024-05-15 17:16:52.624420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.223 [2024-05-15 17:16:52.624809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.223 [2024-05-15 17:16:52.624831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.223 [2024-05-15 17:16:52.629447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.223 [2024-05-15 17:16:52.629813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.223 [2024-05-15 17:16:52.629835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.223 [2024-05-15 17:16:52.634341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.223 [2024-05-15 17:16:52.634732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.223 [2024-05-15 17:16:52.634752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.223 [2024-05-15 17:16:52.639775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.223 [2024-05-15 17:16:52.640171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.223 [2024-05-15 17:16:52.640189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.223 [2024-05-15 17:16:52.645155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.645542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.645562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.650156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.650558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.650576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.655253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 
[2024-05-15 17:16:52.655635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.655655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.660225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.660608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.660627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.665094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.665494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.665517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.670000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.670377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.670397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.674883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.675270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.675290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.679760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.680136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.680155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.685205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.685590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.685609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.690936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.691319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.691337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.696264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.696654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.696673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.701355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.701745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.701763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.706466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.706828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.706847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.711465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.711854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.711873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.716543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.716927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.716947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.721505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.721878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.721896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.726407] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.726792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.726811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.731367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.731750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.731769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.736284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.736685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.736708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.741271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.741655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.741675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.746150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.746560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.746578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.751058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.751453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.751472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.756038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.756421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.756440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:05.224 [2024-05-15 17:16:52.760972] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.761353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.761372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.765840] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.766227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.766246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.770746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.771131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.771150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.775707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.224 [2024-05-15 17:16:52.776092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.224 [2024-05-15 17:16:52.776111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.224 [2024-05-15 17:16:52.780692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.781078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.781096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.785757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.786131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.786151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.790661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.791053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.791072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.795733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.796114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.796137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.801381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.801787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.801807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.806516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.806895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.806914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.811499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.811886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.811904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.816479] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.816863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.816882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.821496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.821888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.821907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.826445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.826827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.826846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.831385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.831784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.831803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.836350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.836738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.836757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.841294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.841673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.841693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.846260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.846651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.846670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.851178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.851560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.851579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.856057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.856442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.856461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.860929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.861336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.861355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.865873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.866260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.866279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.870802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.871163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.871187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.225 [2024-05-15 17:16:52.875763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.225 [2024-05-15 17:16:52.876139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.225 [2024-05-15 17:16:52.876158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.485 [2024-05-15 17:16:52.881791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.485 [2024-05-15 17:16:52.882205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.485 [2024-05-15 17:16:52.882231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.485 [2024-05-15 17:16:52.887256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.485 [2024-05-15 17:16:52.887650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.485 [2024-05-15 17:16:52.887672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.485 [2024-05-15 17:16:52.892868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.485 [2024-05-15 17:16:52.893276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.485 [2024-05-15 17:16:52.893296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.485 [2024-05-15 17:16:52.900901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.485 [2024-05-15 17:16:52.901326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.485 
[2024-05-15 17:16:52.901344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.485 [2024-05-15 17:16:52.911330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.485 [2024-05-15 17:16:52.911749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.485 [2024-05-15 17:16:52.911770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.485 [2024-05-15 17:16:52.919676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.485 [2024-05-15 17:16:52.920057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.485 [2024-05-15 17:16:52.920077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.485 [2024-05-15 17:16:52.925752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.485 [2024-05-15 17:16:52.926145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.485 [2024-05-15 17:16:52.926170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:52.933457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:52.933839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:52.933857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:52.940243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:52.940647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:52.940666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:52.946804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:52.947169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:52.947188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:52.953015] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:52.953420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:52.953439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:52.958481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:52.958933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:52.958951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:52.963681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:52.964039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:52.964058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:52.969045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:52.969424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:52.969443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:52.976003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:52.976468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:52.976487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:52.982287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:52.982671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:52.982690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:52.988173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:52.988539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:52.988558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:52.994025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:52.994406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:52.994436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:52.999804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.000180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.000199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.005793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.006186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.006205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.011990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.012369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.012388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.017966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.018339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.018358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.025004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.025434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.025453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.033254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.033686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.033705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.041057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.041490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.041509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.049516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.049985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.050003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.058328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.058741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.058762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.066920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.067416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.067434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.075732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.076228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.076247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.084708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.085188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.085224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.093800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.094240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.094259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.102467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 
[2024-05-15 17:16:53.102849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.102867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.111125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.111640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.486 [2024-05-15 17:16:53.111658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.486 [2024-05-15 17:16:53.119840] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.486 [2024-05-15 17:16:53.120311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.487 [2024-05-15 17:16:53.120329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.487 [2024-05-15 17:16:53.128561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.487 [2024-05-15 17:16:53.128973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.487 [2024-05-15 17:16:53.128991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.487 [2024-05-15 17:16:53.136883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.487 [2024-05-15 17:16:53.137310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.487 [2024-05-15 17:16:53.137330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.823 [2024-05-15 17:16:53.145774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.823 [2024-05-15 17:16:53.146375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.823 [2024-05-15 17:16:53.146404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.823 [2024-05-15 17:16:53.154451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.823 [2024-05-15 17:16:53.154808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.823 [2024-05-15 17:16:53.154831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.823 [2024-05-15 17:16:53.161464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.823 [2024-05-15 17:16:53.161837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.823 [2024-05-15 17:16:53.161857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.823 [2024-05-15 17:16:53.168901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.823 [2024-05-15 17:16:53.169318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.823 [2024-05-15 17:16:53.169338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.823 [2024-05-15 17:16:53.175089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.823 [2024-05-15 17:16:53.175459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.823 [2024-05-15 17:16:53.175477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.823 [2024-05-15 17:16:53.180490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.823 [2024-05-15 17:16:53.180866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.823 [2024-05-15 17:16:53.180885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.823 [2024-05-15 17:16:53.185757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.823 [2024-05-15 17:16:53.186123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.823 [2024-05-15 17:16:53.186143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.823 [2024-05-15 17:16:53.191186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.823 [2024-05-15 17:16:53.191547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.823 [2024-05-15 17:16:53.191565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.823 [2024-05-15 17:16:53.196347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.823 [2024-05-15 17:16:53.196701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.823 [2024-05-15 17:16:53.196720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.823 [2024-05-15 17:16:53.201500] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.823 [2024-05-15 17:16:53.201873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.823 [2024-05-15 17:16:53.201892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.823 [2024-05-15 17:16:53.207288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.823 [2024-05-15 17:16:53.207739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.823 [2024-05-15 17:16:53.207757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.215662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.216119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.216138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.222246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.222631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.222649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.228193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.228558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.228577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.233606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.233983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.234002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.239904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.240295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.240313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:05.824 [2024-05-15 17:16:53.245196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.245553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.245576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.250307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.250681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.250700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.256343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.256695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.256714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.262216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.262591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.262609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.267470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.267836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.267856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.273103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.273496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.273515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.278916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.279316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.279335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.285791] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.286241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.286260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.293899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.294373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.294392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.301199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.301597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.301615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.307818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.308185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.308204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.314189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.314555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.314573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.320467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.320826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.320843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.326627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.327000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.327018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.333218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.333584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.333602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.339187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.339557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.339577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.344629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.345000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.345018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.350429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.350789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.350808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.356943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.357310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.357329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.362607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.362964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.362982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.367963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.368338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.368357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.373692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.374080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.374098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.824 [2024-05-15 17:16:53.379364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.824 [2024-05-15 17:16:53.379731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.824 [2024-05-15 17:16:53.379749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.825 [2024-05-15 17:16:53.384878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.825 [2024-05-15 17:16:53.385247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.825 [2024-05-15 17:16:53.385265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.825 [2024-05-15 17:16:53.391365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.825 [2024-05-15 17:16:53.391737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.825 [2024-05-15 17:16:53.391756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.825 [2024-05-15 17:16:53.398161] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.825 [2024-05-15 17:16:53.398556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.825 [2024-05-15 17:16:53.398574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.825 [2024-05-15 17:16:53.404347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.825 [2024-05-15 17:16:53.404762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.825 [2024-05-15 17:16:53.404784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.825 [2024-05-15 17:16:53.411326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.825 [2024-05-15 17:16:53.411753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.825 
[2024-05-15 17:16:53.411772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.825 [2024-05-15 17:16:53.419462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.825 [2024-05-15 17:16:53.419945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.825 [2024-05-15 17:16:53.419964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.825 [2024-05-15 17:16:53.427375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.825 [2024-05-15 17:16:53.427813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.825 [2024-05-15 17:16:53.427832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.825 [2024-05-15 17:16:53.435521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.825 [2024-05-15 17:16:53.435981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.825 [2024-05-15 17:16:53.435999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.825 [2024-05-15 17:16:53.443923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.825 [2024-05-15 17:16:53.444388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.825 [2024-05-15 17:16:53.444407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.825 [2024-05-15 17:16:53.452412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:05.825 [2024-05-15 17:16:53.452857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.825 [2024-05-15 17:16:53.452876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.090 [2024-05-15 17:16:53.460485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.090 [2024-05-15 17:16:53.460951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.090 [2024-05-15 17:16:53.460970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.090 [2024-05-15 17:16:53.468859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.090 [2024-05-15 17:16:53.469332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.469351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.476679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.477050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.477069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.484069] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.484474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.484493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.491010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.491400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.491419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.497486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.497842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.497861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.503685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.504032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.504050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.510283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.510701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.510720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.517367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.517734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.517753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.523390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.523757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.523776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.530336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.530710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.530732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.537214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.537602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.537621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.543490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.543858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.543876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.549310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.549676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.549695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.554267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.554623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.554641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.560170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.560543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.560563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.565264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.565657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.565676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.570245] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.570591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.570608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.575712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.576072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.576090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.580828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.581218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.581237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.586309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.586667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.586685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.592705] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.593073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.593092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.598647] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 
[2024-05-15 17:16:53.599006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.599024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.605052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.605410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.605439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.613127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.613561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.613579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.621567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.622026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.622044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.627914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.628303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.628321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.635038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.635407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.635425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.641548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.641915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.091 [2024-05-15 17:16:53.641933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.091 [2024-05-15 17:16:53.647578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.091 [2024-05-15 17:16:53.647954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.647972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.092 [2024-05-15 17:16:53.652969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.092 [2024-05-15 17:16:53.653346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.653365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.092 [2024-05-15 17:16:53.658323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.092 [2024-05-15 17:16:53.658691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.658710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.092 [2024-05-15 17:16:53.663082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.092 [2024-05-15 17:16:53.663461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.663479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.092 [2024-05-15 17:16:53.668048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.092 [2024-05-15 17:16:53.668410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.668429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.092 [2024-05-15 17:16:53.673437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.092 [2024-05-15 17:16:53.673856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.673874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.092 [2024-05-15 17:16:53.679399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.092 [2024-05-15 17:16:53.679842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.679860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.092 [2024-05-15 17:16:53.686541] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.092 [2024-05-15 17:16:53.686915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.686938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.092 [2024-05-15 17:16:53.692494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.092 [2024-05-15 17:16:53.692978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.692997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.092 [2024-05-15 17:16:53.700585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.092 [2024-05-15 17:16:53.701014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.701032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.092 [2024-05-15 17:16:53.709006] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.092 [2024-05-15 17:16:53.709480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.709498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.092 [2024-05-15 17:16:53.717127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.092 [2024-05-15 17:16:53.717502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.717521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.092 [2024-05-15 17:16:53.725431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.092 [2024-05-15 17:16:53.725847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.725866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.092 [2024-05-15 17:16:53.733472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.092 [2024-05-15 17:16:53.733776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.733794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:06.092 [2024-05-15 17:16:53.742406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.092 [2024-05-15 17:16:53.742706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.092 [2024-05-15 17:16:53.742725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.750751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.751182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.751201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.759094] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.759549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.759568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.767947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.768334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.768353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.776635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.777064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.777083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.785356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.785792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.785811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.793757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.794127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.794146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.802298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.802639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.802658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.810830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.811251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.811269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.819935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.820318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.820337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.828290] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.828681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.828701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.837007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.837358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.837377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.844155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.844574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.844592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.852321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.852734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.852753] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.860706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.861123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.861142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.868928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.869374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.869394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.876731] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.877147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.877170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.885388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.885780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.885798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.893373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.893817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.893837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.902020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.902440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.902462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.910084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.910523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.910544] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.918263] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.918654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.918673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.925907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.926216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.351 [2024-05-15 17:16:53.926235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.351 [2024-05-15 17:16:53.934356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.351 [2024-05-15 17:16:53.934734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.352 [2024-05-15 17:16:53.934754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.352 [2024-05-15 17:16:53.942829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.352 [2024-05-15 17:16:53.943206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.352 [2024-05-15 17:16:53.943226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.352 [2024-05-15 17:16:53.950901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.352 [2024-05-15 17:16:53.951232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.352 [2024-05-15 17:16:53.951251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.352 [2024-05-15 17:16:53.958799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.352 [2024-05-15 17:16:53.959256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.352 [2024-05-15 17:16:53.959275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.352 [2024-05-15 17:16:53.967550] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.352 [2024-05-15 17:16:53.967968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:06.352 [2024-05-15 17:16:53.967986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.352 [2024-05-15 17:16:53.975561] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.352 [2024-05-15 17:16:53.975994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.352 [2024-05-15 17:16:53.976013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.352 [2024-05-15 17:16:53.983139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.352 [2024-05-15 17:16:53.983533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.352 [2024-05-15 17:16:53.983552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.352 [2024-05-15 17:16:53.991360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.352 [2024-05-15 17:16:53.991761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.352 [2024-05-15 17:16:53.991780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.352 [2024-05-15 17:16:53.998495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.352 [2024-05-15 17:16:53.998926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.352 [2024-05-15 17:16:53.998945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.352 [2024-05-15 17:16:54.006614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.352 [2024-05-15 17:16:54.006938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.352 [2024-05-15 17:16:54.006957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.611 [2024-05-15 17:16:54.011987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.611 [2024-05-15 17:16:54.012314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.611 [2024-05-15 17:16:54.012333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.611 [2024-05-15 17:16:54.017666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.611 [2024-05-15 17:16:54.018017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.611 [2024-05-15 17:16:54.018036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.611 [2024-05-15 17:16:54.023333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.611 [2024-05-15 17:16:54.023664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.611 [2024-05-15 17:16:54.023683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.611 [2024-05-15 17:16:54.029063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.611 [2024-05-15 17:16:54.029426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.611 [2024-05-15 17:16:54.029449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.611 [2024-05-15 17:16:54.034826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.611 [2024-05-15 17:16:54.035182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.611 [2024-05-15 17:16:54.035201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.611 [2024-05-15 17:16:54.040513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.611 [2024-05-15 17:16:54.040807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.611 [2024-05-15 17:16:54.040827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.611 [2024-05-15 17:16:54.046438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.611 [2024-05-15 17:16:54.046723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.611 [2024-05-15 17:16:54.046743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.611 [2024-05-15 17:16:54.052481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.052787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.052806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.058492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.058898] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.058917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.066347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.066770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.066789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.073588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.073996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.074014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.081140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.081400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.081418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.088982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.089377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.089396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.097129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.097537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.097556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.103882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.104202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.104221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.110262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.110616] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.110635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.116458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.116801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.116821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.122682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.123022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.123041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.129618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.129957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.129976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.136721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.137086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.137106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.143402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.143722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.143742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.149189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.149498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.149518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.154310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 
00:26:06.612 [2024-05-15 17:16:54.154636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.154656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.160049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.160353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.160372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.165279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.165570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.165589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.170224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.170526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.170544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.174943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.175224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.175243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.179238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.179483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.179501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.183306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.183558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.183577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.187415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.187636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.187658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.191720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.191950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.191970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.197501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.197737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.197756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.202221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.202447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.202466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.206566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.206788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.206807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.210892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.612 [2024-05-15 17:16:54.211119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.612 [2024-05-15 17:16:54.211137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.612 [2024-05-15 17:16:54.215283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.613 [2024-05-15 17:16:54.215512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.613 [2024-05-15 17:16:54.215531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.613 [2024-05-15 17:16:54.219616] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.613 [2024-05-15 17:16:54.219853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.613 [2024-05-15 17:16:54.219872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.613 [2024-05-15 17:16:54.224040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.613 [2024-05-15 17:16:54.224285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.613 [2024-05-15 17:16:54.224304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.613 [2024-05-15 17:16:54.228393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.613 [2024-05-15 17:16:54.228624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.613 [2024-05-15 17:16:54.228642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.613 [2024-05-15 17:16:54.232865] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.613 [2024-05-15 17:16:54.233090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.613 [2024-05-15 17:16:54.233109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.613 [2024-05-15 17:16:54.237484] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.613 [2024-05-15 17:16:54.237703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.613 [2024-05-15 17:16:54.237721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.613 [2024-05-15 17:16:54.242388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.613 [2024-05-15 17:16:54.242615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.613 [2024-05-15 17:16:54.242633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.613 [2024-05-15 17:16:54.246714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.613 [2024-05-15 17:16:54.246937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.613 [2024-05-15 17:16:54.246955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:06.613 [2024-05-15 17:16:54.250963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1379050) with pdu=0x2000190fef90 00:26:06.613 [2024-05-15 17:16:54.251036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.613 [2024-05-15 17:16:54.251054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.613 00:26:06.613 Latency(us) 00:26:06.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.613 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:06.613 nvme0n1 : 2.00 5018.79 627.35 0.00 0.00 3182.93 1951.83 9858.89 00:26:06.613 =================================================================================================================== 00:26:06.613 Total : 5018.79 627.35 0.00 0.00 3182.93 1951.83 9858.89 00:26:06.613 0 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:06.872 | .driver_specific 00:26:06.872 | .nvme_error 00:26:06.872 | .status_code 00:26:06.872 | .command_transient_transport_error' 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 324 > 0 )) 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3211437 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3211437 ']' 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3211437 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3211437 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3211437' 00:26:06.872 killing process with pid 3211437 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3211437 00:26:06.872 Received shutdown signal, test time was about 2.000000 seconds 00:26:06.872 00:26:06.872 Latency(us) 00:26:06.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.872 =================================================================================================================== 00:26:06.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.872 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3211437 00:26:07.130 
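The (( 324 > 0 )) check above is the pass criterion for nvmf_digest_error: the harness reads back the COMMAND TRANSIENT TRANSPORT ERROR counter that the injected data-digest corruption drove up, by asking the bperf bdevperf instance for bdev_get_iostat output over /var/tmp/bperf.sock and filtering it with jq. A minimal standalone sketch of that same query, reusing the socket path, bdev name and jq filter shown in the log (it is only meaningful while the bperf process is still alive):

# Sketch: read the transient transport error count for nvme0n1,
# the same way host/digest.sh@27-28 does in the trace above.
errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 )) && echo "digest errors surfaced as transient transport errors: ${errcount}"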
17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3209312 00:26:07.130 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3209312 ']' 00:26:07.130 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3209312 00:26:07.130 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:26:07.130 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:07.130 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3209312 00:26:07.130 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:07.130 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:07.130 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3209312' 00:26:07.130 killing process with pid 3209312 00:26:07.130 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3209312 00:26:07.130 [2024-05-15 17:16:54.749053] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:07.130 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3209312 00:26:07.389 00:26:07.389 real 0m16.875s 00:26:07.389 user 0m32.556s 00:26:07.389 sys 0m4.186s 00:26:07.389 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:07.389 17:16:54 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:07.389 ************************************ 00:26:07.389 END TEST nvmf_digest_error 00:26:07.389 ************************************ 00:26:07.389 17:16:54 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:07.389 17:16:54 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:07.389 17:16:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:07.389 17:16:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:07.389 17:16:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:07.389 17:16:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:07.389 17:16:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:07.389 17:16:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:07.389 rmmod nvme_tcp 00:26:07.389 rmmod nvme_fabrics 00:26:07.389 rmmod nvme_keyring 00:26:07.389 17:16:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3209312 ']' 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3209312 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 3209312 ']' 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 3209312 00:26:07.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3209312) 
- No such process 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 3209312 is not found' 00:26:07.647 Process with pid 3209312 is not found 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:07.647 17:16:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.550 17:16:57 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:09.550 00:26:09.550 real 0m40.919s 00:26:09.550 user 1m5.171s 00:26:09.550 sys 0m12.746s 00:26:09.550 17:16:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:09.550 17:16:57 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:09.550 ************************************ 00:26:09.550 END TEST nvmf_digest 00:26:09.550 ************************************ 00:26:09.550 17:16:57 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:26:09.550 17:16:57 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:26:09.550 17:16:57 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:26:09.550 17:16:57 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:09.550 17:16:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:09.550 17:16:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:09.550 17:16:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:09.550 ************************************ 00:26:09.550 START TEST nvmf_bdevperf 00:26:09.550 ************************************ 00:26:09.550 17:16:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:09.809 * Looking for test storage... 
00:26:09.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.809 17:16:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:09.810 17:16:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.077 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:15.077 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:15.077 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:15.077 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:15.077 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:15.077 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:15.077 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:15.077 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:15.078 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:15.078 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:15.078 Found net devices under 0000:86:00.0: cvl_0_0 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:15.078 Found net devices under 0000:86:00.1: cvl_0_1 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:15.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:15.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:26:15.078 00:26:15.078 --- 10.0.0.2 ping statistics --- 00:26:15.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.078 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:15.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:15.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:26:15.078 00:26:15.078 --- 10.0.0.1 ping statistics --- 00:26:15.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.078 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3215511 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3215511 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3215511 ']' 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:15.078 17:17:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.078 [2024-05-15 17:17:02.647428] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:26:15.078 [2024-05-15 17:17:02.647473] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.078 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.078 [2024-05-15 17:17:02.704622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:15.337 [2024-05-15 17:17:02.784888] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:15.337 [2024-05-15 17:17:02.784925] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.337 [2024-05-15 17:17:02.784932] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.337 [2024-05-15 17:17:02.784938] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.337 [2024-05-15 17:17:02.784943] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:15.337 [2024-05-15 17:17:02.785048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:15.337 [2024-05-15 17:17:02.785143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:15.337 [2024-05-15 17:17:02.785145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.902 [2024-05-15 17:17:03.500738] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.902 Malloc0 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:15.902 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.902 [2024-05-15 17:17:03.560020] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:15.902 [2024-05-15 17:17:03.560246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.160 17:17:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.160 17:17:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:16.160 17:17:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:16.160 17:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:16.160 17:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:16.160 17:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:16.160 17:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:16.160 { 00:26:16.160 "params": { 00:26:16.160 "name": "Nvme$subsystem", 00:26:16.160 "trtype": "$TEST_TRANSPORT", 00:26:16.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.160 "adrfam": "ipv4", 00:26:16.160 "trsvcid": "$NVMF_PORT", 00:26:16.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.160 "hdgst": ${hdgst:-false}, 00:26:16.160 "ddgst": ${ddgst:-false} 00:26:16.160 }, 00:26:16.160 "method": "bdev_nvme_attach_controller" 00:26:16.160 } 00:26:16.160 EOF 00:26:16.160 )") 00:26:16.160 17:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:16.160 17:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:16.160 17:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:16.160 17:17:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:16.160 "params": { 00:26:16.160 "name": "Nvme1", 00:26:16.160 "trtype": "tcp", 00:26:16.160 "traddr": "10.0.0.2", 00:26:16.160 "adrfam": "ipv4", 00:26:16.160 "trsvcid": "4420", 00:26:16.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:16.160 "hdgst": false, 00:26:16.160 "ddgst": false 00:26:16.160 }, 00:26:16.160 "method": "bdev_nvme_attach_controller" 00:26:16.160 }' 00:26:16.160 [2024-05-15 17:17:03.610097] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:26:16.161 [2024-05-15 17:17:03.610137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3215683 ] 00:26:16.161 EAL: No free 2048 kB hugepages reported on node 1 00:26:16.161 [2024-05-15 17:17:03.662650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.161 [2024-05-15 17:17:03.735940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.418 Running I/O for 1 seconds... 
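For readers following the RPC traffic: the rpc_cmd calls traced above (host/bdevperf.sh@17 through @21) build the whole target side — a TCP transport, a 64 MB malloc bdev with a 512-byte block size, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and the 10.0.0.2:4420 listener. rpc_cmd is the harness' wrapper around scripts/rpc.py; a standalone equivalent of the same sequence, assuming a target already listening on the default /var/tmp/spdk.sock, would look roughly like this:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420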
00:26:17.353 00:26:17.353 Latency(us) 00:26:17.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.353 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:17.353 Verification LBA range: start 0x0 length 0x4000 00:26:17.353 Nvme1n1 : 1.01 10867.81 42.45 0.00 0.00 11712.38 1182.50 11511.54 00:26:17.353 =================================================================================================================== 00:26:17.353 Total : 10867.81 42.45 0.00 0.00 11712.38 1182.50 11511.54 00:26:17.611 17:17:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3215923 00:26:17.611 17:17:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:17.611 17:17:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:17.611 17:17:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:17.611 17:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:17.611 17:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:17.611 17:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.611 17:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.611 { 00:26:17.611 "params": { 00:26:17.611 "name": "Nvme$subsystem", 00:26:17.611 "trtype": "$TEST_TRANSPORT", 00:26:17.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.611 "adrfam": "ipv4", 00:26:17.611 "trsvcid": "$NVMF_PORT", 00:26:17.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.611 "hdgst": ${hdgst:-false}, 00:26:17.611 "ddgst": ${ddgst:-false} 00:26:17.611 }, 00:26:17.611 "method": "bdev_nvme_attach_controller" 00:26:17.611 } 00:26:17.611 EOF 00:26:17.611 )") 00:26:17.611 17:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:17.611 17:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:17.611 17:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:17.611 17:17:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:17.611 "params": { 00:26:17.611 "name": "Nvme1", 00:26:17.611 "trtype": "tcp", 00:26:17.611 "traddr": "10.0.0.2", 00:26:17.611 "adrfam": "ipv4", 00:26:17.611 "trsvcid": "4420", 00:26:17.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:17.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:17.611 "hdgst": false, 00:26:17.611 "ddgst": false 00:26:17.611 }, 00:26:17.611 "method": "bdev_nvme_attach_controller" 00:26:17.611 }' 00:26:17.611 [2024-05-15 17:17:05.190992] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:26:17.611 [2024-05-15 17:17:05.191041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3215923 ] 00:26:17.611 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.611 [2024-05-15 17:17:05.246290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.869 [2024-05-15 17:17:05.316224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.869 Running I/O for 15 seconds... 
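On the bdevperf side, the "--json /dev/fd/62" and "--json /dev/fd/63" arguments above come from bash process substitution: gen_nvmf_target_json (nvmf/common.sh) prints the bdev_nvme_attach_controller config fragment shown in the log, and bdevperf reads it as its JSON config through the substituted file descriptor. A sketch of the same two invocations, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json is available in the shell:

bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
$bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 1        # the 1-second run whose results are printed above
$bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f    # the 15-second run during which the target is killed below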
00:26:21.155 17:17:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3215511 00:26:21.155 17:17:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:21.155 [2024-05-15 17:17:08.162301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.155 [2024-05-15 17:17:08.162343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.155 [2024-05-15 17:17:08.162361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.155 [2024-05-15 17:17:08.162370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.155 [2024-05-15 17:17:08.162379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.155 [2024-05-15 17:17:08.162386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.155 [2024-05-15 17:17:08.162395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.155 [2024-05-15 17:17:08.162403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162504] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.156 [2024-05-15 17:17:08.162833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.156 [2024-05-15 17:17:08.162848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.156 [2024-05-15 17:17:08.162862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.156 [2024-05-15 17:17:08.162877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.162990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.162997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:98104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:98120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.156 [2024-05-15 17:17:08.163113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:98136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 
[2024-05-15 17:17:08.163152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163310] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.156 [2024-05-15 17:17:08.163571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.156 [2024-05-15 17:17:08.163578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98472 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:21.157 [2024-05-15 17:17:08.163920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.163986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.163995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164076] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.157 [2024-05-15 17:17:08.164314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e96f0 is same with the state(5) to be set 00:26:21.157 [2024-05-15 17:17:08.164330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.157 [2024-05-15 17:17:08.164335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.157 [2024-05-15 17:17:08.164341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98768 len:8 PRP1 0x0 PRP2 0x0 00:26:21.157 [2024-05-15 17:17:08.164348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.157 [2024-05-15 17:17:08.164389] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12e96f0 was disconnected and freed. reset controller. 
00:26:21.157 [2024-05-15 17:17:08.167417] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.157 [2024-05-15 17:17:08.167473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.157 [2024-05-15 17:17:08.168057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.157 [2024-05-15 17:17:08.168266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.157 [2024-05-15 17:17:08.168278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.157 [2024-05-15 17:17:08.168285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.157 [2024-05-15 17:17:08.168466] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.157 [2024-05-15 17:17:08.168646] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.157 [2024-05-15 17:17:08.168655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.157 [2024-05-15 17:17:08.168663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.157 [2024-05-15 17:17:08.171539] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.157 [2024-05-15 17:17:08.180813] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.157 [2024-05-15 17:17:08.181333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.157 [2024-05-15 17:17:08.181558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.157 [2024-05-15 17:17:08.181569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.157 [2024-05-15 17:17:08.181576] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.157 [2024-05-15 17:17:08.181757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.157 [2024-05-15 17:17:08.181938] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.157 [2024-05-15 17:17:08.181947] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.157 [2024-05-15 17:17:08.181953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.157 [2024-05-15 17:17:08.184778] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.157 [2024-05-15 17:17:08.193860] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.157 [2024-05-15 17:17:08.194354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.157 [2024-05-15 17:17:08.194576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.157 [2024-05-15 17:17:08.194587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.157 [2024-05-15 17:17:08.194593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.157 [2024-05-15 17:17:08.194758] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.157 [2024-05-15 17:17:08.194923] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.157 [2024-05-15 17:17:08.194933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.157 [2024-05-15 17:17:08.194938] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.157 [2024-05-15 17:17:08.197775] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.157 [2024-05-15 17:17:08.207068] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.157 [2024-05-15 17:17:08.207511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.157 [2024-05-15 17:17:08.207750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.157 [2024-05-15 17:17:08.207781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.157 [2024-05-15 17:17:08.207802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.157 [2024-05-15 17:17:08.208399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.157 [2024-05-15 17:17:08.208763] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.157 [2024-05-15 17:17:08.208772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.157 [2024-05-15 17:17:08.208778] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.157 [2024-05-15 17:17:08.211603] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.157 [2024-05-15 17:17:08.220177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.157 [2024-05-15 17:17:08.220635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.157 [2024-05-15 17:17:08.220943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.157 [2024-05-15 17:17:08.220974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.157 [2024-05-15 17:17:08.220995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.157 [2024-05-15 17:17:08.221571] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.157 [2024-05-15 17:17:08.221746] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.157 [2024-05-15 17:17:08.221753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.157 [2024-05-15 17:17:08.221760] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.157 [2024-05-15 17:17:08.224540] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.157 [2024-05-15 17:17:08.233008] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.157 [2024-05-15 17:17:08.233467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.157 [2024-05-15 17:17:08.233709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.157 [2024-05-15 17:17:08.233740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.157 [2024-05-15 17:17:08.233762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.157 [2024-05-15 17:17:08.234350] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.157 [2024-05-15 17:17:08.234526] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.157 [2024-05-15 17:17:08.234533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.157 [2024-05-15 17:17:08.234540] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.157 [2024-05-15 17:17:08.237252] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.157 [2024-05-15 17:17:08.245867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.157 [2024-05-15 17:17:08.246345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.157 [2024-05-15 17:17:08.246585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.246616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.246636] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.246877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.247051] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.247059] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.247065] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.249784] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.158 [2024-05-15 17:17:08.258792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.259190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.259384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.259397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.259422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.260010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.260611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.260640] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.260646] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.263359] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.158 [2024-05-15 17:17:08.271672] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.272148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.272420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.272452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.272473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.272768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.272943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.272950] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.272956] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.275728] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.158 [2024-05-15 17:17:08.284597] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.285040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.285296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.285309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.285316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.285494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.285659] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.285666] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.285672] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.288373] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.158 [2024-05-15 17:17:08.297455] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.297892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.298070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.298107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.298135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.298672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.298847] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.298856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.298862] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.301574] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.158 [2024-05-15 17:17:08.310347] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.310723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.310996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.311006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.311013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.311193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.311368] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.311375] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.311382] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.314091] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.158 [2024-05-15 17:17:08.323178] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.323578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.323764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.323774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.323805] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.324406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.324646] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.324655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.324662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.327346] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.158 [2024-05-15 17:17:08.336122] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.336582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.336764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.336794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.336816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.337418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.337864] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.337872] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.337878] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.340587] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.158 [2024-05-15 17:17:08.349047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.349381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.349613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.349643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.349665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.350052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.350231] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.350239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.350245] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.352957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.158 [2024-05-15 17:17:08.361890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.362298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.362483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.362493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.362499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.362664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.362828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.362836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.362842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.365552] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.158 [2024-05-15 17:17:08.374782] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.375264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.375563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.375594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.375615] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.375956] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.376133] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.376141] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.376147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.378967] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.158 [2024-05-15 17:17:08.387604] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.388049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.388365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.388400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.388421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.388839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.389003] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.389011] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.389016] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.391734] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.158 [2024-05-15 17:17:08.400509] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.400926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.401182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.401193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.401200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.401374] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.401548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.401556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.401562] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.404273] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.158 [2024-05-15 17:17:08.413357] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.413762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.414038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.414049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.414056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.414242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.414422] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.414433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.414439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.417515] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
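Annotation: each refused connect is followed by "Failed to flush tqpair=0x10b7840 (9): Bad file descriptor". The 9 in parentheses is errno EBADF, which is what I/O on a socket that was never established (or has already been torn down) returns. A minimal sketch, again plain POSIX rather than SPDK code, showing the same errno from a write() on a closed descriptor:

/* Minimal sketch (POSIX only, not SPDK code): the "(9): Bad file descriptor"
 * entries above are errno EBADF, produced when I/O is attempted on a socket
 * that was never connected or has already been closed. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    close(fd);                        /* tear the descriptor down first */

    char byte = 0;
    if (write(fd, &byte, 1) < 0) {
        /* Prints: write failed, errno = 9 (Bad file descriptor) */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    return 0;
}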
00:26:21.158 [2024-05-15 17:17:08.426517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.427010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.427200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.427213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.427220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.427401] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.427580] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.427588] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.427595] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.430463] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.158 [2024-05-15 17:17:08.439573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.158 [2024-05-15 17:17:08.440067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.440307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.158 [2024-05-15 17:17:08.440340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.158 [2024-05-15 17:17:08.440361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.158 [2024-05-15 17:17:08.440946] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.158 [2024-05-15 17:17:08.441153] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.158 [2024-05-15 17:17:08.441162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.158 [2024-05-15 17:17:08.441172] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.158 [2024-05-15 17:17:08.443948] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.159 [2024-05-15 17:17:08.452570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.453003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.453329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.453361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.453383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.453938] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.454113] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.454121] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.454131] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.456910] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.159 [2024-05-15 17:17:08.465464] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.465954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.466222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.466254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.466275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.466861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.467157] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.467171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.467177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.469900] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.159 [2024-05-15 17:17:08.478393] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.478753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.478992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.479002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.479008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.479188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.479364] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.479372] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.479378] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.482083] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.159 [2024-05-15 17:17:08.491391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.491853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.492127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.492138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.492144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.492323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.492498] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.492506] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.492513] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.495227] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.159 [2024-05-15 17:17:08.504313] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.504765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.505055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.505085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.505107] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.505705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.506227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.506237] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.506243] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.508949] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.159 [2024-05-15 17:17:08.517254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.517681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.517881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.517891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.517898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.518072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.518250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.518259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.518264] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.520976] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.159 [2024-05-15 17:17:08.530315] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.530696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.530924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.530954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.530974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.531462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.531637] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.531645] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.531651] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.534364] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.159 [2024-05-15 17:17:08.543134] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.543590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.543699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.543709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.543715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.543889] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.544063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.544072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.544078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.546789] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.159 [2024-05-15 17:17:08.555985] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.556329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.556504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.556514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.556520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.556694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.556867] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.556876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.556882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.559595] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.159 [2024-05-15 17:17:08.569009] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.569349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.569555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.569586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.569608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.570124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.570301] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.570310] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.570316] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.573020] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.159 [2024-05-15 17:17:08.581879] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.582327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.582499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.582530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.582550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.583135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.583438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.583447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.583453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.586160] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.159 [2024-05-15 17:17:08.594834] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.595212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.595390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.595428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.595449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.596036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.596611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.596619] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.596625] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.599344] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.159 [2024-05-15 17:17:08.607691] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.608131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.608446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.608478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.608500] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.609086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.609457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.609469] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.609478] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.613583] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.159 [2024-05-15 17:17:08.621162] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.621630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.621900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.621931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.621952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.622497] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.622673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.622681] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.622686] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.625431] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.159 [2024-05-15 17:17:08.634018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.634503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.634826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.634856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.634876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.635432] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.635606] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.635614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.635620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.638329] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.159 [2024-05-15 17:17:08.646966] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.647404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.647665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.647676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.647682] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.647856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.648030] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.648038] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.648044] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.650755] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.159 [2024-05-15 17:17:08.659829] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.660275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.660505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.660535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.660564] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.661149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.661412] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.661420] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.661425] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.664169] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.159 [2024-05-15 17:17:08.672979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.673429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.673697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.673727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.159 [2024-05-15 17:17:08.673748] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.159 [2024-05-15 17:17:08.674344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.159 [2024-05-15 17:17:08.674803] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.159 [2024-05-15 17:17:08.674811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.159 [2024-05-15 17:17:08.674817] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.159 [2024-05-15 17:17:08.677638] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.159 [2024-05-15 17:17:08.686058] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.159 [2024-05-15 17:17:08.686420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.686650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.159 [2024-05-15 17:17:08.686660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.160 [2024-05-15 17:17:08.686666] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.160 [2024-05-15 17:17:08.686841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.160 [2024-05-15 17:17:08.687015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.160 [2024-05-15 17:17:08.687023] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.160 [2024-05-15 17:17:08.687028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.160 [2024-05-15 17:17:08.689742] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.160 [2024-05-15 17:17:08.698928] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.160 [2024-05-15 17:17:08.699372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.699553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.699563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.160 [2024-05-15 17:17:08.699569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.160 [2024-05-15 17:17:08.699737] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.160 [2024-05-15 17:17:08.699901] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.160 [2024-05-15 17:17:08.699908] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.160 [2024-05-15 17:17:08.699914] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.160 [2024-05-15 17:17:08.702626] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.160 [2024-05-15 17:17:08.711804] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.160 [2024-05-15 17:17:08.712152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.712411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.712422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.160 [2024-05-15 17:17:08.712429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.160 [2024-05-15 17:17:08.712603] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.160 [2024-05-15 17:17:08.712777] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.160 [2024-05-15 17:17:08.712784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.160 [2024-05-15 17:17:08.712790] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.160 [2024-05-15 17:17:08.715504] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.160 [2024-05-15 17:17:08.724625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.160 [2024-05-15 17:17:08.725048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.725301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.725312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.160 [2024-05-15 17:17:08.725318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.160 [2024-05-15 17:17:08.725483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.160 [2024-05-15 17:17:08.725647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.160 [2024-05-15 17:17:08.725654] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.160 [2024-05-15 17:17:08.725660] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.160 [2024-05-15 17:17:08.728354] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.160 [2024-05-15 17:17:08.737499] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.160 [2024-05-15 17:17:08.737958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.738281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.738313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.160 [2024-05-15 17:17:08.738334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.160 [2024-05-15 17:17:08.738642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.160 [2024-05-15 17:17:08.738819] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.160 [2024-05-15 17:17:08.738827] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.160 [2024-05-15 17:17:08.738833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.160 [2024-05-15 17:17:08.741539] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.160 [2024-05-15 17:17:08.750421] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.160 [2024-05-15 17:17:08.750772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.751000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.751010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.160 [2024-05-15 17:17:08.751016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.160 [2024-05-15 17:17:08.751188] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.160 [2024-05-15 17:17:08.751381] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.160 [2024-05-15 17:17:08.751389] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.160 [2024-05-15 17:17:08.751395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.160 [2024-05-15 17:17:08.754098] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.160 [2024-05-15 17:17:08.763605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.160 [2024-05-15 17:17:08.764096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.764399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.764431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.160 [2024-05-15 17:17:08.764452] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.160 [2024-05-15 17:17:08.764912] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.160 [2024-05-15 17:17:08.765091] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.160 [2024-05-15 17:17:08.765099] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.160 [2024-05-15 17:17:08.765105] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.160 [2024-05-15 17:17:08.767897] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.160 [2024-05-15 17:17:08.776494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.160 [2024-05-15 17:17:08.776942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.777217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.777249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.160 [2024-05-15 17:17:08.777271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.160 [2024-05-15 17:17:08.777857] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.160 [2024-05-15 17:17:08.778174] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.160 [2024-05-15 17:17:08.778184] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.160 [2024-05-15 17:17:08.778206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.160 [2024-05-15 17:17:08.780948] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.160 [2024-05-15 17:17:08.789414] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.160 [2024-05-15 17:17:08.789837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.790116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.790146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.160 [2024-05-15 17:17:08.790181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.160 [2024-05-15 17:17:08.790769] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.160 [2024-05-15 17:17:08.791025] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.160 [2024-05-15 17:17:08.791036] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.160 [2024-05-15 17:17:08.791045] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.160 [2024-05-15 17:17:08.795145] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.160 [2024-05-15 17:17:08.802880] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.160 [2024-05-15 17:17:08.803308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.803491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.160 [2024-05-15 17:17:08.803501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.160 [2024-05-15 17:17:08.803508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.160 [2024-05-15 17:17:08.803681] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.160 [2024-05-15 17:17:08.803856] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.160 [2024-05-15 17:17:08.803863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.160 [2024-05-15 17:17:08.803870] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.160 [2024-05-15 17:17:08.806757] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.420 [2024-05-15 17:17:08.815739] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.420 [2024-05-15 17:17:08.816194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.816415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.816446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.420 [2024-05-15 17:17:08.816469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.420 [2024-05-15 17:17:08.816888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.420 [2024-05-15 17:17:08.817052] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.420 [2024-05-15 17:17:08.817060] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.420 [2024-05-15 17:17:08.817069] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.420 [2024-05-15 17:17:08.819787] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.420 [2024-05-15 17:17:08.828649] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.420 [2024-05-15 17:17:08.829113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.829388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.829420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.420 [2024-05-15 17:17:08.829442] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.420 [2024-05-15 17:17:08.830028] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.420 [2024-05-15 17:17:08.830273] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.420 [2024-05-15 17:17:08.830281] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.420 [2024-05-15 17:17:08.830287] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.420 [2024-05-15 17:17:08.833059] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.420 [2024-05-15 17:17:08.841503] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.420 [2024-05-15 17:17:08.841951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.842206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.842217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.420 [2024-05-15 17:17:08.842224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.420 [2024-05-15 17:17:08.842398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.420 [2024-05-15 17:17:08.842572] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.420 [2024-05-15 17:17:08.842580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.420 [2024-05-15 17:17:08.842586] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.420 [2024-05-15 17:17:08.845297] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.420 [2024-05-15 17:17:08.854367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.420 [2024-05-15 17:17:08.854818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.855045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.855054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.420 [2024-05-15 17:17:08.855061] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.420 [2024-05-15 17:17:08.855243] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.420 [2024-05-15 17:17:08.855418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.420 [2024-05-15 17:17:08.855426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.420 [2024-05-15 17:17:08.855432] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.420 [2024-05-15 17:17:08.858139] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.420 [2024-05-15 17:17:08.867205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.420 [2024-05-15 17:17:08.867663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.867916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.867946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.420 [2024-05-15 17:17:08.867967] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.420 [2024-05-15 17:17:08.868239] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.420 [2024-05-15 17:17:08.868413] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.420 [2024-05-15 17:17:08.868421] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.420 [2024-05-15 17:17:08.868427] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.420 [2024-05-15 17:17:08.871131] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.420 [2024-05-15 17:17:08.880106] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.420 [2024-05-15 17:17:08.880543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.880679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.880689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.420 [2024-05-15 17:17:08.880695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.420 [2024-05-15 17:17:08.880869] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.420 [2024-05-15 17:17:08.881043] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.420 [2024-05-15 17:17:08.881051] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.420 [2024-05-15 17:17:08.881057] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.420 [2024-05-15 17:17:08.883773] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.420 [2024-05-15 17:17:08.893134] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.420 [2024-05-15 17:17:08.893597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.893903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.893933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.420 [2024-05-15 17:17:08.893955] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.420 [2024-05-15 17:17:08.894284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.420 [2024-05-15 17:17:08.894459] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.420 [2024-05-15 17:17:08.894467] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.420 [2024-05-15 17:17:08.894473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.420 [2024-05-15 17:17:08.897181] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.420 [2024-05-15 17:17:08.905997] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.420 [2024-05-15 17:17:08.906457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.420 [2024-05-15 17:17:08.906714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.906724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.421 [2024-05-15 17:17:08.906731] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.421 [2024-05-15 17:17:08.906905] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.421 [2024-05-15 17:17:08.907079] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.421 [2024-05-15 17:17:08.907087] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.421 [2024-05-15 17:17:08.907093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.421 [2024-05-15 17:17:08.909804] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.421 [2024-05-15 17:17:08.918871] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.421 [2024-05-15 17:17:08.919335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.919500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.919510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.421 [2024-05-15 17:17:08.919517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.421 [2024-05-15 17:17:08.919691] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.421 [2024-05-15 17:17:08.919864] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.421 [2024-05-15 17:17:08.919872] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.421 [2024-05-15 17:17:08.919878] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.421 [2024-05-15 17:17:08.922749] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.421 [2024-05-15 17:17:08.931999] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.421 [2024-05-15 17:17:08.932456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.932731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.932761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.421 [2024-05-15 17:17:08.932782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.421 [2024-05-15 17:17:08.933180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.421 [2024-05-15 17:17:08.933360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.421 [2024-05-15 17:17:08.933368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.421 [2024-05-15 17:17:08.933375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.421 [2024-05-15 17:17:08.936160] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.421 [2024-05-15 17:17:08.944910] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.421 [2024-05-15 17:17:08.945338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.945590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.945600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.421 [2024-05-15 17:17:08.945606] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.421 [2024-05-15 17:17:08.945780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.421 [2024-05-15 17:17:08.945953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.421 [2024-05-15 17:17:08.945961] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.421 [2024-05-15 17:17:08.945967] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.421 [2024-05-15 17:17:08.948678] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.421 [2024-05-15 17:17:08.957796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.421 [2024-05-15 17:17:08.958204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.958483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.958493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.421 [2024-05-15 17:17:08.958499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.421 [2024-05-15 17:17:08.958663] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.421 [2024-05-15 17:17:08.958827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.421 [2024-05-15 17:17:08.958835] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.421 [2024-05-15 17:17:08.958841] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.421 [2024-05-15 17:17:08.961550] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.421 [2024-05-15 17:17:08.970620] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.421 [2024-05-15 17:17:08.971067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.971333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.971365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.421 [2024-05-15 17:17:08.971386] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.421 [2024-05-15 17:17:08.971973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.421 [2024-05-15 17:17:08.972232] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.421 [2024-05-15 17:17:08.972240] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.421 [2024-05-15 17:17:08.972246] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.421 [2024-05-15 17:17:08.976314] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.421 [2024-05-15 17:17:08.984290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.421 [2024-05-15 17:17:08.984735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.984919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.984928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.421 [2024-05-15 17:17:08.984935] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.421 [2024-05-15 17:17:08.985109] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.421 [2024-05-15 17:17:08.985289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.421 [2024-05-15 17:17:08.985298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.421 [2024-05-15 17:17:08.985304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.421 [2024-05-15 17:17:08.988046] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.421 [2024-05-15 17:17:08.997235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.421 [2024-05-15 17:17:08.997679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.998007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:08.998036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.421 [2024-05-15 17:17:08.998058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.421 [2024-05-15 17:17:08.998387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.421 [2024-05-15 17:17:08.998562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.421 [2024-05-15 17:17:08.998570] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.421 [2024-05-15 17:17:08.998576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.421 [2024-05-15 17:17:09.001290] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.421 [2024-05-15 17:17:09.010185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.421 [2024-05-15 17:17:09.010581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:09.010721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:09.010731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.421 [2024-05-15 17:17:09.010737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.421 [2024-05-15 17:17:09.010912] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.421 [2024-05-15 17:17:09.011086] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.421 [2024-05-15 17:17:09.011093] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.421 [2024-05-15 17:17:09.011099] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.421 [2024-05-15 17:17:09.013815] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.421 [2024-05-15 17:17:09.023176] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.421 [2024-05-15 17:17:09.023617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:09.023898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.421 [2024-05-15 17:17:09.023928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.421 [2024-05-15 17:17:09.023956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.421 [2024-05-15 17:17:09.024298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.421 [2024-05-15 17:17:09.024472] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.422 [2024-05-15 17:17:09.024480] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.422 [2024-05-15 17:17:09.024487] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.422 [2024-05-15 17:17:09.027268] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.422 [2024-05-15 17:17:09.036072] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.422 [2024-05-15 17:17:09.036563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.422 [2024-05-15 17:17:09.036892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.422 [2024-05-15 17:17:09.036922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.422 [2024-05-15 17:17:09.036943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.422 [2024-05-15 17:17:09.037521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.422 [2024-05-15 17:17:09.037697] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.422 [2024-05-15 17:17:09.037706] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.422 [2024-05-15 17:17:09.037712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.422 [2024-05-15 17:17:09.040421] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.422 [2024-05-15 17:17:09.049033] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.422 [2024-05-15 17:17:09.049417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.422 [2024-05-15 17:17:09.049613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.422 [2024-05-15 17:17:09.049623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.422 [2024-05-15 17:17:09.049630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.422 [2024-05-15 17:17:09.049803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.422 [2024-05-15 17:17:09.049978] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.422 [2024-05-15 17:17:09.049986] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.422 [2024-05-15 17:17:09.049992] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.422 [2024-05-15 17:17:09.052706] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.422 [2024-05-15 17:17:09.061999] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.422 [2024-05-15 17:17:09.062469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.422 [2024-05-15 17:17:09.062743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.422 [2024-05-15 17:17:09.062773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.422 [2024-05-15 17:17:09.062793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.422 [2024-05-15 17:17:09.063400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.422 [2024-05-15 17:17:09.063781] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.422 [2024-05-15 17:17:09.063792] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.422 [2024-05-15 17:17:09.063801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.422 [2024-05-15 17:17:09.067907] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.422 [2024-05-15 17:17:09.075667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.422 [2024-05-15 17:17:09.076144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.422 [2024-05-15 17:17:09.076284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.422 [2024-05-15 17:17:09.076295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.422 [2024-05-15 17:17:09.076302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.422 [2024-05-15 17:17:09.076484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.422 [2024-05-15 17:17:09.076663] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.422 [2024-05-15 17:17:09.076672] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.422 [2024-05-15 17:17:09.076678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.682 [2024-05-15 17:17:09.079613] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.682 [2024-05-15 17:17:09.088591] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.682 [2024-05-15 17:17:09.089087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.089369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.089402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.682 [2024-05-15 17:17:09.089424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.682 [2024-05-15 17:17:09.089731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.682 [2024-05-15 17:17:09.089896] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.682 [2024-05-15 17:17:09.089904] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.682 [2024-05-15 17:17:09.089909] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.682 [2024-05-15 17:17:09.092679] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.682 [2024-05-15 17:17:09.101548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.682 [2024-05-15 17:17:09.101952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.102198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.102208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.682 [2024-05-15 17:17:09.102215] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.682 [2024-05-15 17:17:09.102398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.682 [2024-05-15 17:17:09.102566] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.682 [2024-05-15 17:17:09.102574] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.682 [2024-05-15 17:17:09.102580] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.682 [2024-05-15 17:17:09.105273] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.682 [2024-05-15 17:17:09.114504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.682 [2024-05-15 17:17:09.114908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.115081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.115091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.682 [2024-05-15 17:17:09.115098] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.682 [2024-05-15 17:17:09.115279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.682 [2024-05-15 17:17:09.115454] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.682 [2024-05-15 17:17:09.115462] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.682 [2024-05-15 17:17:09.115468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.682 [2024-05-15 17:17:09.118177] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.682 [2024-05-15 17:17:09.127404] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.682 [2024-05-15 17:17:09.127875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.128185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.128217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.682 [2024-05-15 17:17:09.128238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.682 [2024-05-15 17:17:09.128601] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.682 [2024-05-15 17:17:09.128776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.682 [2024-05-15 17:17:09.128784] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.682 [2024-05-15 17:17:09.128790] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.682 [2024-05-15 17:17:09.131559] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.682 [2024-05-15 17:17:09.140322] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.682 [2024-05-15 17:17:09.140805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.140994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.141024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.682 [2024-05-15 17:17:09.141045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.682 [2024-05-15 17:17:09.141645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.682 [2024-05-15 17:17:09.141868] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.682 [2024-05-15 17:17:09.141876] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.682 [2024-05-15 17:17:09.141882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.682 [2024-05-15 17:17:09.144594] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.682 [2024-05-15 17:17:09.153223] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.682 [2024-05-15 17:17:09.153675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.153930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.153963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.682 [2024-05-15 17:17:09.153985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.682 [2024-05-15 17:17:09.154577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.682 [2024-05-15 17:17:09.154752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.682 [2024-05-15 17:17:09.154761] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.682 [2024-05-15 17:17:09.154766] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.682 [2024-05-15 17:17:09.158664] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.682 [2024-05-15 17:17:09.166719] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.682 [2024-05-15 17:17:09.167189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.167396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.167406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.682 [2024-05-15 17:17:09.167412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.682 [2024-05-15 17:17:09.167586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.682 [2024-05-15 17:17:09.167760] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.682 [2024-05-15 17:17:09.167768] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.682 [2024-05-15 17:17:09.167774] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.682 [2024-05-15 17:17:09.170558] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.682 [2024-05-15 17:17:09.179853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.682 [2024-05-15 17:17:09.180311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.180494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.682 [2024-05-15 17:17:09.180504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.682 [2024-05-15 17:17:09.180511] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.683 [2024-05-15 17:17:09.180690] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.683 [2024-05-15 17:17:09.180868] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.683 [2024-05-15 17:17:09.180877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.683 [2024-05-15 17:17:09.180886] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.683 [2024-05-15 17:17:09.183703] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.683 [2024-05-15 17:17:09.192907] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.683 [2024-05-15 17:17:09.193318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.193555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.193586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.683 [2024-05-15 17:17:09.193611] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.683 [2024-05-15 17:17:09.194115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.683 [2024-05-15 17:17:09.194294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.683 [2024-05-15 17:17:09.194302] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.683 [2024-05-15 17:17:09.194308] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.683 [2024-05-15 17:17:09.197083] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.683 [2024-05-15 17:17:09.205900] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.683 [2024-05-15 17:17:09.206263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.206522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.206532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.683 [2024-05-15 17:17:09.206539] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.683 [2024-05-15 17:17:09.206713] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.683 [2024-05-15 17:17:09.206887] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.683 [2024-05-15 17:17:09.206895] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.683 [2024-05-15 17:17:09.206902] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.683 [2024-05-15 17:17:09.209615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.683 [2024-05-15 17:17:09.218869] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.683 [2024-05-15 17:17:09.219301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.219491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.219500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.683 [2024-05-15 17:17:09.219507] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.683 [2024-05-15 17:17:09.219671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.683 [2024-05-15 17:17:09.219836] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.683 [2024-05-15 17:17:09.219844] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.683 [2024-05-15 17:17:09.219853] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.683 [2024-05-15 17:17:09.222565] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.683 [2024-05-15 17:17:09.231901] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.683 [2024-05-15 17:17:09.232380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.232581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.232590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.683 [2024-05-15 17:17:09.232597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.683 [2024-05-15 17:17:09.232771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.683 [2024-05-15 17:17:09.232945] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.683 [2024-05-15 17:17:09.232953] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.683 [2024-05-15 17:17:09.232959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.683 [2024-05-15 17:17:09.235669] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.683 [2024-05-15 17:17:09.244742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.683 [2024-05-15 17:17:09.245219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.245449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.245459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.683 [2024-05-15 17:17:09.245465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.683 [2024-05-15 17:17:09.245630] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.683 [2024-05-15 17:17:09.245795] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.683 [2024-05-15 17:17:09.245803] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.683 [2024-05-15 17:17:09.245808] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.683 [2024-05-15 17:17:09.248514] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.683 [2024-05-15 17:17:09.257658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.683 [2024-05-15 17:17:09.258094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.258337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.258348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.683 [2024-05-15 17:17:09.258354] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.683 [2024-05-15 17:17:09.258518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.683 [2024-05-15 17:17:09.258683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.683 [2024-05-15 17:17:09.258690] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.683 [2024-05-15 17:17:09.258696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.683 [2024-05-15 17:17:09.261395] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.683 [2024-05-15 17:17:09.270621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.683 [2024-05-15 17:17:09.271118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.271322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.271354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.683 [2024-05-15 17:17:09.271375] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.683 [2024-05-15 17:17:09.271961] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.683 [2024-05-15 17:17:09.272218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.683 [2024-05-15 17:17:09.272226] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.683 [2024-05-15 17:17:09.272232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.683 [2024-05-15 17:17:09.274997] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.683 [2024-05-15 17:17:09.283486] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.683 [2024-05-15 17:17:09.283941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.284199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.284233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.683 [2024-05-15 17:17:09.284255] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.683 [2024-05-15 17:17:09.284841] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.683 [2024-05-15 17:17:09.285180] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.683 [2024-05-15 17:17:09.285188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.683 [2024-05-15 17:17:09.285194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.683 [2024-05-15 17:17:09.287820] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.683 [2024-05-15 17:17:09.296424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.683 [2024-05-15 17:17:09.296908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.297224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.297258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.683 [2024-05-15 17:17:09.297279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.683 [2024-05-15 17:17:09.297836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.683 [2024-05-15 17:17:09.298009] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.683 [2024-05-15 17:17:09.298017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.683 [2024-05-15 17:17:09.298023] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.683 [2024-05-15 17:17:09.300753] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.683 [2024-05-15 17:17:09.309374] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.683 [2024-05-15 17:17:09.309832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.310006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.683 [2024-05-15 17:17:09.310016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.683 [2024-05-15 17:17:09.310022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.683 [2024-05-15 17:17:09.310208] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.683 [2024-05-15 17:17:09.310383] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.683 [2024-05-15 17:17:09.310391] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.683 [2024-05-15 17:17:09.310397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.684 [2024-05-15 17:17:09.313102] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.684 [2024-05-15 17:17:09.322220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.684 [2024-05-15 17:17:09.322693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.684 [2024-05-15 17:17:09.322877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.684 [2024-05-15 17:17:09.322907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.684 [2024-05-15 17:17:09.322928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.684 [2024-05-15 17:17:09.323535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.684 [2024-05-15 17:17:09.324113] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.684 [2024-05-15 17:17:09.324121] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.684 [2024-05-15 17:17:09.324127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.684 [2024-05-15 17:17:09.326836] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.684 [2024-05-15 17:17:09.335133] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.684 [2024-05-15 17:17:09.335621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.684 [2024-05-15 17:17:09.335793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.684 [2024-05-15 17:17:09.335805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.684 [2024-05-15 17:17:09.335812] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.684 [2024-05-15 17:17:09.335993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.684 [2024-05-15 17:17:09.336179] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.684 [2024-05-15 17:17:09.336188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.684 [2024-05-15 17:17:09.336194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.684 [2024-05-15 17:17:09.339124] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.944 [2024-05-15 17:17:09.348202] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.944 [2024-05-15 17:17:09.348681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.348875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.348886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.944 [2024-05-15 17:17:09.348893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.944 [2024-05-15 17:17:09.349068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.944 [2024-05-15 17:17:09.349250] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.944 [2024-05-15 17:17:09.349259] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.944 [2024-05-15 17:17:09.349265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.944 [2024-05-15 17:17:09.351972] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.944 [2024-05-15 17:17:09.361044] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.944 [2024-05-15 17:17:09.361497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.361806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.361837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.944 [2024-05-15 17:17:09.361858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.944 [2024-05-15 17:17:09.362163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.944 [2024-05-15 17:17:09.362344] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.944 [2024-05-15 17:17:09.362352] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.944 [2024-05-15 17:17:09.362358] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.944 [2024-05-15 17:17:09.365063] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.944 [2024-05-15 17:17:09.373979] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.944 [2024-05-15 17:17:09.374381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.374551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.374561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.944 [2024-05-15 17:17:09.374568] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.944 [2024-05-15 17:17:09.374742] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.944 [2024-05-15 17:17:09.374917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.944 [2024-05-15 17:17:09.374924] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.944 [2024-05-15 17:17:09.374930] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.944 [2024-05-15 17:17:09.377640] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.944 [2024-05-15 17:17:09.386850] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.944 [2024-05-15 17:17:09.387228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.387438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.387468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.944 [2024-05-15 17:17:09.387496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.944 [2024-05-15 17:17:09.388082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.944 [2024-05-15 17:17:09.388545] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.944 [2024-05-15 17:17:09.388553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.944 [2024-05-15 17:17:09.388559] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.944 [2024-05-15 17:17:09.391268] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.944 [2024-05-15 17:17:09.399712] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.944 [2024-05-15 17:17:09.400184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.400438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.400447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.944 [2024-05-15 17:17:09.400454] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.944 [2024-05-15 17:17:09.400618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.944 [2024-05-15 17:17:09.400782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.944 [2024-05-15 17:17:09.400790] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.944 [2024-05-15 17:17:09.400795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.944 [2024-05-15 17:17:09.403506] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.944 [2024-05-15 17:17:09.412573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.944 [2024-05-15 17:17:09.413060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.413356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.413389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.944 [2024-05-15 17:17:09.413410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.944 [2024-05-15 17:17:09.413610] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.944 [2024-05-15 17:17:09.413775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.944 [2024-05-15 17:17:09.413782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.944 [2024-05-15 17:17:09.413788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.944 [2024-05-15 17:17:09.416658] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.944 [2024-05-15 17:17:09.425524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.944 [2024-05-15 17:17:09.426001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.426260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.426272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.944 [2024-05-15 17:17:09.426282] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.944 [2024-05-15 17:17:09.426458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.944 [2024-05-15 17:17:09.426632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.944 [2024-05-15 17:17:09.426639] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.944 [2024-05-15 17:17:09.426645] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.944 [2024-05-15 17:17:09.429528] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.944 [2024-05-15 17:17:09.438602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.944 [2024-05-15 17:17:09.439056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.439286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.439297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.944 [2024-05-15 17:17:09.439304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.944 [2024-05-15 17:17:09.439489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.944 [2024-05-15 17:17:09.439663] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.944 [2024-05-15 17:17:09.439671] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.944 [2024-05-15 17:17:09.439677] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.944 [2024-05-15 17:17:09.442485] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.944 [2024-05-15 17:17:09.451426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.944 [2024-05-15 17:17:09.451847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.452104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.944 [2024-05-15 17:17:09.452114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.944 [2024-05-15 17:17:09.452120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.944 [2024-05-15 17:17:09.452312] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.944 [2024-05-15 17:17:09.452487] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.945 [2024-05-15 17:17:09.452495] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.945 [2024-05-15 17:17:09.452501] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.945 [2024-05-15 17:17:09.455209] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.945 [2024-05-15 17:17:09.464274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.945 [2024-05-15 17:17:09.464759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.465054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.465084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.945 [2024-05-15 17:17:09.465106] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.945 [2024-05-15 17:17:09.465415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.945 [2024-05-15 17:17:09.465589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.945 [2024-05-15 17:17:09.465597] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.945 [2024-05-15 17:17:09.465602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.945 [2024-05-15 17:17:09.468309] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.945 [2024-05-15 17:17:09.477220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.945 [2024-05-15 17:17:09.477687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.477986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.478016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.945 [2024-05-15 17:17:09.478037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.945 [2024-05-15 17:17:09.478620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.945 [2024-05-15 17:17:09.478876] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.945 [2024-05-15 17:17:09.478887] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.945 [2024-05-15 17:17:09.478896] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.945 [2024-05-15 17:17:09.483002] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.945 [2024-05-15 17:17:09.490694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.945 [2024-05-15 17:17:09.491174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.491427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.491437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.945 [2024-05-15 17:17:09.491443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.945 [2024-05-15 17:17:09.491617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.945 [2024-05-15 17:17:09.491791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.945 [2024-05-15 17:17:09.491799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.945 [2024-05-15 17:17:09.491804] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.945 [2024-05-15 17:17:09.494553] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.945 [2024-05-15 17:17:09.503543] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.945 [2024-05-15 17:17:09.503919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.504174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.504185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.945 [2024-05-15 17:17:09.504208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.945 [2024-05-15 17:17:09.504383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.945 [2024-05-15 17:17:09.504559] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.945 [2024-05-15 17:17:09.504567] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.945 [2024-05-15 17:17:09.504573] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.945 [2024-05-15 17:17:09.507283] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.945 [2024-05-15 17:17:09.516360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.945 [2024-05-15 17:17:09.516833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.517070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.517100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.945 [2024-05-15 17:17:09.517121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.945 [2024-05-15 17:17:09.517535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.945 [2024-05-15 17:17:09.517710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.945 [2024-05-15 17:17:09.517717] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.945 [2024-05-15 17:17:09.517723] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.945 [2024-05-15 17:17:09.520432] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.945 [2024-05-15 17:17:09.529288] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.945 [2024-05-15 17:17:09.529778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.530052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.530083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.945 [2024-05-15 17:17:09.530104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.945 [2024-05-15 17:17:09.530711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.945 [2024-05-15 17:17:09.530922] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.945 [2024-05-15 17:17:09.530930] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.945 [2024-05-15 17:17:09.530936] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.945 [2024-05-15 17:17:09.533645] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.945 [2024-05-15 17:17:09.542250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.945 [2024-05-15 17:17:09.542735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.542979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.543009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.945 [2024-05-15 17:17:09.543030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.945 [2024-05-15 17:17:09.543628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.945 [2024-05-15 17:17:09.543803] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.945 [2024-05-15 17:17:09.543814] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.945 [2024-05-15 17:17:09.543821] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.945 [2024-05-15 17:17:09.546540] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.945 [2024-05-15 17:17:09.555138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.945 [2024-05-15 17:17:09.555550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.555851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.555881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.945 [2024-05-15 17:17:09.555902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.945 [2024-05-15 17:17:09.556512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.945 [2024-05-15 17:17:09.556687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.945 [2024-05-15 17:17:09.556694] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.945 [2024-05-15 17:17:09.556700] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.945 [2024-05-15 17:17:09.559406] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.945 [2024-05-15 17:17:09.568020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.945 [2024-05-15 17:17:09.568401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.568575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.568585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.945 [2024-05-15 17:17:09.568591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.945 [2024-05-15 17:17:09.568765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.945 [2024-05-15 17:17:09.568939] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.945 [2024-05-15 17:17:09.568946] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.945 [2024-05-15 17:17:09.568953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.945 [2024-05-15 17:17:09.571664] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:21.945 [2024-05-15 17:17:09.580894] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.945 [2024-05-15 17:17:09.581345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.945 [2024-05-15 17:17:09.581599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.946 [2024-05-15 17:17:09.581629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.946 [2024-05-15 17:17:09.581650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.946 [2024-05-15 17:17:09.582249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.946 [2024-05-15 17:17:09.582838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.946 [2024-05-15 17:17:09.582862] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.946 [2024-05-15 17:17:09.582888] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.946 [2024-05-15 17:17:09.585670] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.946 [2024-05-15 17:17:09.593822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.946 [2024-05-15 17:17:09.594269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.946 [2024-05-15 17:17:09.594515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.946 [2024-05-15 17:17:09.594545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:21.946 [2024-05-15 17:17:09.594566] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:21.946 [2024-05-15 17:17:09.594918] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:21.946 [2024-05-15 17:17:09.595092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.946 [2024-05-15 17:17:09.595100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.946 [2024-05-15 17:17:09.595106] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.946 [2024-05-15 17:17:09.597940] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.205 [2024-05-15 17:17:09.606827] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.205 [2024-05-15 17:17:09.607268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.205 [2024-05-15 17:17:09.607444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.205 [2024-05-15 17:17:09.607458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.205 [2024-05-15 17:17:09.607465] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.205 [2024-05-15 17:17:09.607647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.205 [2024-05-15 17:17:09.607828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.205 [2024-05-15 17:17:09.607836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.205 [2024-05-15 17:17:09.607843] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.205 [2024-05-15 17:17:09.610586] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.205 [2024-05-15 17:17:09.619665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.205 [2024-05-15 17:17:09.620113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.205 [2024-05-15 17:17:09.620362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.205 [2024-05-15 17:17:09.620393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.205 [2024-05-15 17:17:09.620414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.205 [2024-05-15 17:17:09.620799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.205 [2024-05-15 17:17:09.620974] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.205 [2024-05-15 17:17:09.620981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.205 [2024-05-15 17:17:09.620987] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.205 [2024-05-15 17:17:09.623698] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.205 [2024-05-15 17:17:09.632851] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.205 [2024-05-15 17:17:09.633266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.205 [2024-05-15 17:17:09.633445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.205 [2024-05-15 17:17:09.633455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.205 [2024-05-15 17:17:09.633481] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.205 [2024-05-15 17:17:09.634029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.205 [2024-05-15 17:17:09.634208] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.205 [2024-05-15 17:17:09.634217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.205 [2024-05-15 17:17:09.634223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.205 [2024-05-15 17:17:09.636931] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.205 [2024-05-15 17:17:09.645703] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.205 [2024-05-15 17:17:09.646191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.205 [2024-05-15 17:17:09.646464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.646494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.206 [2024-05-15 17:17:09.646515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.206 [2024-05-15 17:17:09.646874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.206 [2024-05-15 17:17:09.647049] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.206 [2024-05-15 17:17:09.647056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.206 [2024-05-15 17:17:09.647062] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.206 [2024-05-15 17:17:09.649773] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.206 [2024-05-15 17:17:09.658549] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.206 [2024-05-15 17:17:09.658990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.659175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.659186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.206 [2024-05-15 17:17:09.659192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.206 [2024-05-15 17:17:09.659367] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.206 [2024-05-15 17:17:09.659542] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.206 [2024-05-15 17:17:09.659549] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.206 [2024-05-15 17:17:09.659556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.206 [2024-05-15 17:17:09.662294] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.206 [2024-05-15 17:17:09.671376] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.206 [2024-05-15 17:17:09.671709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.671891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.671901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.206 [2024-05-15 17:17:09.671907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.206 [2024-05-15 17:17:09.672082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.206 [2024-05-15 17:17:09.672261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.206 [2024-05-15 17:17:09.672270] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.206 [2024-05-15 17:17:09.672276] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.206 [2024-05-15 17:17:09.674982] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.206 [2024-05-15 17:17:09.684308] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.206 [2024-05-15 17:17:09.684645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.684793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.684803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.206 [2024-05-15 17:17:09.684811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.206 [2024-05-15 17:17:09.684990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.206 [2024-05-15 17:17:09.685175] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.206 [2024-05-15 17:17:09.685184] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.206 [2024-05-15 17:17:09.685191] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.206 [2024-05-15 17:17:09.688056] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.206 [2024-05-15 17:17:09.697337] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.206 [2024-05-15 17:17:09.697724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.697929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.697938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.206 [2024-05-15 17:17:09.697945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.206 [2024-05-15 17:17:09.698119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.206 [2024-05-15 17:17:09.698317] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.206 [2024-05-15 17:17:09.698327] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.206 [2024-05-15 17:17:09.698333] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.206 [2024-05-15 17:17:09.701152] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.206 [2024-05-15 17:17:09.710338] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.206 [2024-05-15 17:17:09.710799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.711031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.711061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.206 [2024-05-15 17:17:09.711082] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.206 [2024-05-15 17:17:09.711431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.206 [2024-05-15 17:17:09.711688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.206 [2024-05-15 17:17:09.711699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.206 [2024-05-15 17:17:09.711707] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.206 [2024-05-15 17:17:09.715820] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.206 [2024-05-15 17:17:09.723615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.206 [2024-05-15 17:17:09.723999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.724229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.724240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.206 [2024-05-15 17:17:09.724247] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.206 [2024-05-15 17:17:09.724420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.206 [2024-05-15 17:17:09.724594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.206 [2024-05-15 17:17:09.724602] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.206 [2024-05-15 17:17:09.724608] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.206 [2024-05-15 17:17:09.727389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.206 [2024-05-15 17:17:09.736538] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.206 [2024-05-15 17:17:09.736980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.737246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.737278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.206 [2024-05-15 17:17:09.737299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.206 [2024-05-15 17:17:09.737883] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.206 [2024-05-15 17:17:09.738132] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.206 [2024-05-15 17:17:09.738140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.206 [2024-05-15 17:17:09.738146] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.206 [2024-05-15 17:17:09.740864] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.206 [2024-05-15 17:17:09.749483] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.206 [2024-05-15 17:17:09.749813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.749995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.750009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.206 [2024-05-15 17:17:09.750016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.206 [2024-05-15 17:17:09.750196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.206 [2024-05-15 17:17:09.750370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.206 [2024-05-15 17:17:09.750378] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.206 [2024-05-15 17:17:09.750383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.206 [2024-05-15 17:17:09.753096] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.206 [2024-05-15 17:17:09.762328] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.206 [2024-05-15 17:17:09.762619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.762786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.206 [2024-05-15 17:17:09.762795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.206 [2024-05-15 17:17:09.762802] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.206 [2024-05-15 17:17:09.762976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.206 [2024-05-15 17:17:09.763150] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.206 [2024-05-15 17:17:09.763158] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.206 [2024-05-15 17:17:09.763170] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.207 [2024-05-15 17:17:09.765879] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.207 [2024-05-15 17:17:09.775270] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.207 [2024-05-15 17:17:09.775601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.207 [2024-05-15 17:17:09.775700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.207 [2024-05-15 17:17:09.775710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.207 [2024-05-15 17:17:09.775717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.207 [2024-05-15 17:17:09.775891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.207 [2024-05-15 17:17:09.776065] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.207 [2024-05-15 17:17:09.776072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.207 [2024-05-15 17:17:09.776078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.207 [2024-05-15 17:17:09.778790] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.207 [2024-05-15 17:17:09.788304] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.207 [2024-05-15 17:17:09.788691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.207 [2024-05-15 17:17:09.788818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.207 [2024-05-15 17:17:09.788828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.207 [2024-05-15 17:17:09.788837] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.207 [2024-05-15 17:17:09.789011] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.207 [2024-05-15 17:17:09.789192] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.207 [2024-05-15 17:17:09.789200] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.207 [2024-05-15 17:17:09.789206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.207 [2024-05-15 17:17:09.791983] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.207 [2024-05-15 17:17:09.801152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.207 [2024-05-15 17:17:09.801557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.207 [2024-05-15 17:17:09.801646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.207 [2024-05-15 17:17:09.801656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.207 [2024-05-15 17:17:09.801662] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.207 [2024-05-15 17:17:09.801836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.207 [2024-05-15 17:17:09.802010] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.207 [2024-05-15 17:17:09.802019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.207 [2024-05-15 17:17:09.802025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.207 [2024-05-15 17:17:09.804716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.207 [2024-05-15 17:17:09.814111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.207 [2024-05-15 17:17:09.814578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.207 [2024-05-15 17:17:09.814875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.207 [2024-05-15 17:17:09.814905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.207 [2024-05-15 17:17:09.814926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.207 [2024-05-15 17:17:09.815276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.207 [2024-05-15 17:17:09.815451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.207 [2024-05-15 17:17:09.815459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.207 [2024-05-15 17:17:09.815465] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.207 [2024-05-15 17:17:09.818174] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.207 [2024-05-15 17:17:09.826944] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.207 [2024-05-15 17:17:09.827898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.207 [2024-05-15 17:17:09.828161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.207 [2024-05-15 17:17:09.828177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.207 [2024-05-15 17:17:09.828184] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.207 [2024-05-15 17:17:09.828369] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.207 [2024-05-15 17:17:09.828543] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.207 [2024-05-15 17:17:09.828551] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.207 [2024-05-15 17:17:09.828557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.207 [2024-05-15 17:17:09.831339] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.207 [2024-05-15 17:17:09.840072] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.207 [2024-05-15 17:17:09.840488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.207 [2024-05-15 17:17:09.840690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.207 [2024-05-15 17:17:09.840700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.207 [2024-05-15 17:17:09.840707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.207 [2024-05-15 17:17:09.840881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.207 [2024-05-15 17:17:09.841055] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.207 [2024-05-15 17:17:09.841063] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.207 [2024-05-15 17:17:09.841069] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.207 [2024-05-15 17:17:09.843781] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.207 [2024-05-15 17:17:09.853167] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.207 [2024-05-15 17:17:09.853519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.207 [2024-05-15 17:17:09.853748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.207 [2024-05-15 17:17:09.853779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.207 [2024-05-15 17:17:09.853800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.207 [2024-05-15 17:17:09.854402] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.207 [2024-05-15 17:17:09.854615] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.207 [2024-05-15 17:17:09.854623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.207 [2024-05-15 17:17:09.854629] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.207 [2024-05-15 17:17:09.857338] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.466 [2024-05-15 17:17:09.866186] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.466 [2024-05-15 17:17:09.866520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.866636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.866646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.467 [2024-05-15 17:17:09.866653] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.467 [2024-05-15 17:17:09.866827] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.467 [2024-05-15 17:17:09.867004] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.467 [2024-05-15 17:17:09.867013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.467 [2024-05-15 17:17:09.867018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.467 [2024-05-15 17:17:09.869942] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.467 [2024-05-15 17:17:09.879069] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.467 [2024-05-15 17:17:09.879391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.879612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.879643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.467 [2024-05-15 17:17:09.879665] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.467 [2024-05-15 17:17:09.880241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.467 [2024-05-15 17:17:09.880416] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.467 [2024-05-15 17:17:09.880424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.467 [2024-05-15 17:17:09.880430] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.467 [2024-05-15 17:17:09.883233] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.467 [2024-05-15 17:17:09.892121] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.467 [2024-05-15 17:17:09.892493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.892632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.892642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.467 [2024-05-15 17:17:09.892648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.467 [2024-05-15 17:17:09.892823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.467 [2024-05-15 17:17:09.892997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.467 [2024-05-15 17:17:09.893004] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.467 [2024-05-15 17:17:09.893010] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.467 [2024-05-15 17:17:09.895723] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.467 [2024-05-15 17:17:09.904969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.467 [2024-05-15 17:17:09.905301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.905482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.905492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.467 [2024-05-15 17:17:09.905499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.467 [2024-05-15 17:17:09.905672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.467 [2024-05-15 17:17:09.905846] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.467 [2024-05-15 17:17:09.905857] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.467 [2024-05-15 17:17:09.905863] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.467 [2024-05-15 17:17:09.908574] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.467 [2024-05-15 17:17:09.917812] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.467 [2024-05-15 17:17:09.918190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.918309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.918319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.467 [2024-05-15 17:17:09.918326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.467 [2024-05-15 17:17:09.918500] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.467 [2024-05-15 17:17:09.918674] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.467 [2024-05-15 17:17:09.918682] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.467 [2024-05-15 17:17:09.918688] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.467 [2024-05-15 17:17:09.921417] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.467 [2024-05-15 17:17:09.930684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.467 [2024-05-15 17:17:09.930978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.931080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.931089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.467 [2024-05-15 17:17:09.931096] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.467 [2024-05-15 17:17:09.931273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.467 [2024-05-15 17:17:09.931448] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.467 [2024-05-15 17:17:09.931456] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.467 [2024-05-15 17:17:09.931462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.467 [2024-05-15 17:17:09.934198] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.467 [2024-05-15 17:17:09.943778] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.467 [2024-05-15 17:17:09.944083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.944262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.944273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.467 [2024-05-15 17:17:09.944279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.467 [2024-05-15 17:17:09.944458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.467 [2024-05-15 17:17:09.944638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.467 [2024-05-15 17:17:09.944646] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.467 [2024-05-15 17:17:09.944655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.467 [2024-05-15 17:17:09.947524] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.467 [2024-05-15 17:17:09.956814] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.467 [2024-05-15 17:17:09.957254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.957438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.957448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.467 [2024-05-15 17:17:09.957454] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.467 [2024-05-15 17:17:09.957628] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.467 [2024-05-15 17:17:09.957802] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.467 [2024-05-15 17:17:09.957811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.467 [2024-05-15 17:17:09.957816] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.467 [2024-05-15 17:17:09.960597] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.467 [2024-05-15 17:17:09.969939] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.467 [2024-05-15 17:17:09.970255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.970488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.970518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.467 [2024-05-15 17:17:09.970539] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.467 [2024-05-15 17:17:09.970906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.467 [2024-05-15 17:17:09.971080] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.467 [2024-05-15 17:17:09.971088] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.467 [2024-05-15 17:17:09.971093] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.467 [2024-05-15 17:17:09.973870] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.467 [2024-05-15 17:17:09.982892] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.467 [2024-05-15 17:17:09.983254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.983392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.467 [2024-05-15 17:17:09.983402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.467 [2024-05-15 17:17:09.983409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.467 [2024-05-15 17:17:09.983998] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.468 [2024-05-15 17:17:09.984203] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.468 [2024-05-15 17:17:09.984211] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.468 [2024-05-15 17:17:09.984217] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.468 [2024-05-15 17:17:09.988180] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.468 [2024-05-15 17:17:09.996325] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.468 [2024-05-15 17:17:09.996646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:09.996760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:09.996770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.468 [2024-05-15 17:17:09.996799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.468 [2024-05-15 17:17:09.997399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.468 [2024-05-15 17:17:09.997902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.468 [2024-05-15 17:17:09.997910] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.468 [2024-05-15 17:17:09.997916] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.468 [2024-05-15 17:17:10.000677] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.468 [2024-05-15 17:17:10.009482] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.468 [2024-05-15 17:17:10.009869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.010135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.010180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.468 [2024-05-15 17:17:10.010203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.468 [2024-05-15 17:17:10.010791] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.468 [2024-05-15 17:17:10.011010] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.468 [2024-05-15 17:17:10.011019] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.468 [2024-05-15 17:17:10.011025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.468 [2024-05-15 17:17:10.013915] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.468 [2024-05-15 17:17:10.022605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.468 [2024-05-15 17:17:10.023083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.023263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.023274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.468 [2024-05-15 17:17:10.023281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.468 [2024-05-15 17:17:10.023461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.468 [2024-05-15 17:17:10.023641] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.468 [2024-05-15 17:17:10.023650] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.468 [2024-05-15 17:17:10.023656] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.468 [2024-05-15 17:17:10.026464] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.468 [2024-05-15 17:17:10.035633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.468 [2024-05-15 17:17:10.036100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.036294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.036306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.468 [2024-05-15 17:17:10.036313] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.468 [2024-05-15 17:17:10.036504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.468 [2024-05-15 17:17:10.036684] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.468 [2024-05-15 17:17:10.036692] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.468 [2024-05-15 17:17:10.036698] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.468 [2024-05-15 17:17:10.039517] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.468 [2024-05-15 17:17:10.048933] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.468 [2024-05-15 17:17:10.049346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.049581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.049591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.468 [2024-05-15 17:17:10.049598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.468 [2024-05-15 17:17:10.049778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.468 [2024-05-15 17:17:10.049958] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.468 [2024-05-15 17:17:10.049966] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.468 [2024-05-15 17:17:10.049972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.468 [2024-05-15 17:17:10.052836] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.468 [2024-05-15 17:17:10.062139] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.468 [2024-05-15 17:17:10.062577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.062836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.062846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.468 [2024-05-15 17:17:10.062853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.468 [2024-05-15 17:17:10.063032] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.468 [2024-05-15 17:17:10.063216] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.468 [2024-05-15 17:17:10.063224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.468 [2024-05-15 17:17:10.063230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.468 [2024-05-15 17:17:10.066042] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.468 [2024-05-15 17:17:10.075267] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.468 [2024-05-15 17:17:10.075719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.075902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.075912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.468 [2024-05-15 17:17:10.075919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.468 [2024-05-15 17:17:10.076098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.468 [2024-05-15 17:17:10.076284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.468 [2024-05-15 17:17:10.076293] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.468 [2024-05-15 17:17:10.076299] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.468 [2024-05-15 17:17:10.079122] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.468 [2024-05-15 17:17:10.088260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.468 [2024-05-15 17:17:10.088661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.088900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.088910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.468 [2024-05-15 17:17:10.088917] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.468 [2024-05-15 17:17:10.089096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.468 [2024-05-15 17:17:10.089280] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.468 [2024-05-15 17:17:10.089289] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.468 [2024-05-15 17:17:10.089295] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.468 [2024-05-15 17:17:10.092154] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.468 [2024-05-15 17:17:10.101302] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.468 [2024-05-15 17:17:10.101696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.101950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.101961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.468 [2024-05-15 17:17:10.101968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.468 [2024-05-15 17:17:10.102142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.468 [2024-05-15 17:17:10.102324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.468 [2024-05-15 17:17:10.102333] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.468 [2024-05-15 17:17:10.102339] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.468 [2024-05-15 17:17:10.105105] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.468 [2024-05-15 17:17:10.114390] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.468 [2024-05-15 17:17:10.114775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.468 [2024-05-15 17:17:10.115006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.469 [2024-05-15 17:17:10.115020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.469 [2024-05-15 17:17:10.115026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.469 [2024-05-15 17:17:10.115206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.469 [2024-05-15 17:17:10.115380] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.469 [2024-05-15 17:17:10.115388] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.469 [2024-05-15 17:17:10.115394] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.469 [2024-05-15 17:17:10.118172] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.761 [2024-05-15 17:17:10.127462] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.761 [2024-05-15 17:17:10.127943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-05-15 17:17:10.128234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-05-15 17:17:10.128269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.761 [2024-05-15 17:17:10.128291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.761 [2024-05-15 17:17:10.128891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.761 [2024-05-15 17:17:10.129210] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.761 [2024-05-15 17:17:10.129219] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.761 [2024-05-15 17:17:10.129226] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.761 [2024-05-15 17:17:10.132088] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.761 [2024-05-15 17:17:10.140498] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.761 [2024-05-15 17:17:10.140986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-05-15 17:17:10.141208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-05-15 17:17:10.141240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.761 [2024-05-15 17:17:10.141262] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.761 [2024-05-15 17:17:10.141846] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.761 [2024-05-15 17:17:10.142114] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.761 [2024-05-15 17:17:10.142122] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.761 [2024-05-15 17:17:10.142128] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.761 [2024-05-15 17:17:10.144893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.761 [2024-05-15 17:17:10.153566] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.761 [2024-05-15 17:17:10.154036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-05-15 17:17:10.154258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.761 [2024-05-15 17:17:10.154291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.761 [2024-05-15 17:17:10.154320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.761 [2024-05-15 17:17:10.154555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.761 [2024-05-15 17:17:10.154729] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.761 [2024-05-15 17:17:10.154737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.761 [2024-05-15 17:17:10.154743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.761 [2024-05-15 17:17:10.157480] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.761 [2024-05-15 17:17:10.166512] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.762 [2024-05-15 17:17:10.166987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.167254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.167264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.762 [2024-05-15 17:17:10.167271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.762 [2024-05-15 17:17:10.167445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.762 [2024-05-15 17:17:10.167618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.762 [2024-05-15 17:17:10.167626] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.762 [2024-05-15 17:17:10.167632] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.762 [2024-05-15 17:17:10.170442] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.762 [2024-05-15 17:17:10.179623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.762 [2024-05-15 17:17:10.180073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.180380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.180413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.762 [2024-05-15 17:17:10.180435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.762 [2024-05-15 17:17:10.180643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.762 [2024-05-15 17:17:10.180817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.762 [2024-05-15 17:17:10.180825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.762 [2024-05-15 17:17:10.180831] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.762 [2024-05-15 17:17:10.183629] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.762 [2024-05-15 17:17:10.192684] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.762 [2024-05-15 17:17:10.193157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.193301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.193311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.762 [2024-05-15 17:17:10.193318] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.762 [2024-05-15 17:17:10.193496] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.762 [2024-05-15 17:17:10.193670] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.762 [2024-05-15 17:17:10.193679] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.762 [2024-05-15 17:17:10.193685] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.762 [2024-05-15 17:17:10.196466] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.762 [2024-05-15 17:17:10.205802] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.762 [2024-05-15 17:17:10.206248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.206444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.206454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.762 [2024-05-15 17:17:10.206461] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.762 [2024-05-15 17:17:10.206641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.762 [2024-05-15 17:17:10.206820] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.762 [2024-05-15 17:17:10.206828] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.762 [2024-05-15 17:17:10.206834] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.762 [2024-05-15 17:17:10.209652] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.762 [2024-05-15 17:17:10.219029] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.762 [2024-05-15 17:17:10.219514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.219745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.219775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.762 [2024-05-15 17:17:10.219796] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.762 [2024-05-15 17:17:10.220395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.762 [2024-05-15 17:17:10.220635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.762 [2024-05-15 17:17:10.220643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.762 [2024-05-15 17:17:10.220649] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.762 [2024-05-15 17:17:10.223518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.762 [2024-05-15 17:17:10.232136] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.762 [2024-05-15 17:17:10.232471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.232700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.232730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.762 [2024-05-15 17:17:10.232751] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.762 [2024-05-15 17:17:10.233351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.762 [2024-05-15 17:17:10.233651] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.762 [2024-05-15 17:17:10.233659] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.762 [2024-05-15 17:17:10.233666] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.762 [2024-05-15 17:17:10.236572] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.762 [2024-05-15 17:17:10.245118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.762 [2024-05-15 17:17:10.245642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.245982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.246012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.762 [2024-05-15 17:17:10.246034] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.762 [2024-05-15 17:17:10.246623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.762 [2024-05-15 17:17:10.246879] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.762 [2024-05-15 17:17:10.246890] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.762 [2024-05-15 17:17:10.246898] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.762 [2024-05-15 17:17:10.251006] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.762 [2024-05-15 17:17:10.258740] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.762 [2024-05-15 17:17:10.259218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.259515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.762 [2024-05-15 17:17:10.259545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.762 [2024-05-15 17:17:10.259566] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.763 [2024-05-15 17:17:10.259797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.763 [2024-05-15 17:17:10.259972] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.763 [2024-05-15 17:17:10.259980] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.763 [2024-05-15 17:17:10.259986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.763 [2024-05-15 17:17:10.262766] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.763 [2024-05-15 17:17:10.271772] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.763 [2024-05-15 17:17:10.272154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.272498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.272528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.763 [2024-05-15 17:17:10.272549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.763 [2024-05-15 17:17:10.272862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.763 [2024-05-15 17:17:10.273036] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.763 [2024-05-15 17:17:10.273046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.763 [2024-05-15 17:17:10.273052] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.763 [2024-05-15 17:17:10.275832] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.763 [2024-05-15 17:17:10.284837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.763 [2024-05-15 17:17:10.285295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.285534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.285544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.763 [2024-05-15 17:17:10.285551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.763 [2024-05-15 17:17:10.285725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.763 [2024-05-15 17:17:10.285899] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.763 [2024-05-15 17:17:10.285907] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.763 [2024-05-15 17:17:10.285912] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.763 [2024-05-15 17:17:10.288693] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.763 [2024-05-15 17:17:10.297860] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.763 [2024-05-15 17:17:10.298291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.298552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.298562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.763 [2024-05-15 17:17:10.298569] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.763 [2024-05-15 17:17:10.298744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.763 [2024-05-15 17:17:10.298917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.763 [2024-05-15 17:17:10.298925] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.763 [2024-05-15 17:17:10.298931] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.763 [2024-05-15 17:17:10.301747] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.763 [2024-05-15 17:17:10.310884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.763 [2024-05-15 17:17:10.311318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.311647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.311678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.763 [2024-05-15 17:17:10.311699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.763 [2024-05-15 17:17:10.312302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.763 [2024-05-15 17:17:10.312533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.763 [2024-05-15 17:17:10.312541] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.763 [2024-05-15 17:17:10.312551] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.763 [2024-05-15 17:17:10.315327] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.763 [2024-05-15 17:17:10.323980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.763 [2024-05-15 17:17:10.324427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.324727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.324757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.763 [2024-05-15 17:17:10.324778] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.763 [2024-05-15 17:17:10.325379] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.763 [2024-05-15 17:17:10.325588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.763 [2024-05-15 17:17:10.325596] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.763 [2024-05-15 17:17:10.325602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.763 [2024-05-15 17:17:10.328383] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.763 [2024-05-15 17:17:10.337038] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.763 [2024-05-15 17:17:10.337548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.337846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.337877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.763 [2024-05-15 17:17:10.337898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.763 [2024-05-15 17:17:10.338500] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.763 [2024-05-15 17:17:10.338835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.763 [2024-05-15 17:17:10.338843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.763 [2024-05-15 17:17:10.338849] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.763 [2024-05-15 17:17:10.341625] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.763 [2024-05-15 17:17:10.350059] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.763 [2024-05-15 17:17:10.350512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.350759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.350769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.763 [2024-05-15 17:17:10.350776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.763 [2024-05-15 17:17:10.350949] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.763 [2024-05-15 17:17:10.351123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.763 [2024-05-15 17:17:10.351131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.763 [2024-05-15 17:17:10.351137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.763 [2024-05-15 17:17:10.353945] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.763 [2024-05-15 17:17:10.363172] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.763 [2024-05-15 17:17:10.363629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.363801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.363811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.763 [2024-05-15 17:17:10.363818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.763 [2024-05-15 17:17:10.363992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.763 [2024-05-15 17:17:10.364172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.763 [2024-05-15 17:17:10.364181] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.763 [2024-05-15 17:17:10.364187] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.763 [2024-05-15 17:17:10.366960] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.763 [2024-05-15 17:17:10.376290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.763 [2024-05-15 17:17:10.376726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.376903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.763 [2024-05-15 17:17:10.376929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.763 [2024-05-15 17:17:10.376952] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.763 [2024-05-15 17:17:10.377521] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.763 [2024-05-15 17:17:10.377695] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.764 [2024-05-15 17:17:10.377703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.764 [2024-05-15 17:17:10.377709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.764 [2024-05-15 17:17:10.380487] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.764 [2024-05-15 17:17:10.389358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.764 [2024-05-15 17:17:10.389769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.764 [2024-05-15 17:17:10.390022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.764 [2024-05-15 17:17:10.390032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.764 [2024-05-15 17:17:10.390038] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.764 [2024-05-15 17:17:10.390219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.764 [2024-05-15 17:17:10.390392] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.764 [2024-05-15 17:17:10.390400] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.764 [2024-05-15 17:17:10.390405] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.764 [2024-05-15 17:17:10.393181] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:22.764 [2024-05-15 17:17:10.402351] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.764 [2024-05-15 17:17:10.402795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.764 [2024-05-15 17:17:10.403025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.764 [2024-05-15 17:17:10.403035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.764 [2024-05-15 17:17:10.403042] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.764 [2024-05-15 17:17:10.403222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.764 [2024-05-15 17:17:10.403397] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.764 [2024-05-15 17:17:10.403405] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.764 [2024-05-15 17:17:10.403411] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.764 [2024-05-15 17:17:10.406213] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.764 [2024-05-15 17:17:10.415522] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:22.764 [2024-05-15 17:17:10.415889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.764 [2024-05-15 17:17:10.416149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.764 [2024-05-15 17:17:10.416162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:22.764 [2024-05-15 17:17:10.416177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:22.764 [2024-05-15 17:17:10.416360] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:22.764 [2024-05-15 17:17:10.416567] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.764 [2024-05-15 17:17:10.416577] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.764 [2024-05-15 17:17:10.416583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.023 [2024-05-15 17:17:10.419494] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.023 [2024-05-15 17:17:10.428624] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.023 [2024-05-15 17:17:10.429083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.023 [2024-05-15 17:17:10.429345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.023 [2024-05-15 17:17:10.429357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.023 [2024-05-15 17:17:10.429364] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.023 [2024-05-15 17:17:10.429549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.024 [2024-05-15 17:17:10.429724] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.024 [2024-05-15 17:17:10.429732] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.024 [2024-05-15 17:17:10.429737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.024 [2024-05-15 17:17:10.432512] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.024 [2024-05-15 17:17:10.441541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.024 [2024-05-15 17:17:10.442006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.024 [2024-05-15 17:17:10.442233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.024 [2024-05-15 17:17:10.442266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.024 [2024-05-15 17:17:10.442288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.024 [2024-05-15 17:17:10.442873] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.024 [2024-05-15 17:17:10.443110] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.024 [2024-05-15 17:17:10.443118] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.024 [2024-05-15 17:17:10.443124] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.024 [2024-05-15 17:17:10.445900] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.024 [2024-05-15 17:17:10.454582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.024 [2024-05-15 17:17:10.455031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.024 [2024-05-15 17:17:10.455306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.024 [2024-05-15 17:17:10.455317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.024 [2024-05-15 17:17:10.455324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.024 [2024-05-15 17:17:10.455504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.024 [2024-05-15 17:17:10.455683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.024 [2024-05-15 17:17:10.455691] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.024 [2024-05-15 17:17:10.455697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.024 [2024-05-15 17:17:10.458571] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.024 [2024-05-15 17:17:10.467647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.024 [2024-05-15 17:17:10.468094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.024 [2024-05-15 17:17:10.468353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.024 [2024-05-15 17:17:10.468384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.024 [2024-05-15 17:17:10.468405] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.024 [2024-05-15 17:17:10.468809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.024 [2024-05-15 17:17:10.468989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.024 [2024-05-15 17:17:10.468997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.024 [2024-05-15 17:17:10.469003] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.024 [2024-05-15 17:17:10.471801] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.024 [2024-05-15 17:17:10.480644] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.024 [2024-05-15 17:17:10.481093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.024 [2024-05-15 17:17:10.481334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.024 [2024-05-15 17:17:10.481373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.024 [2024-05-15 17:17:10.481395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.024 [2024-05-15 17:17:10.481852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.024 [2024-05-15 17:17:10.482026] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.024 [2024-05-15 17:17:10.482034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.024 [2024-05-15 17:17:10.482039] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.024 [2024-05-15 17:17:10.484814] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.024 [2024-05-15 17:17:10.493655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.024 [2024-05-15 17:17:10.494031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.024 [2024-05-15 17:17:10.494262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.024 [2024-05-15 17:17:10.494273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.024 [2024-05-15 17:17:10.494279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.024 [2024-05-15 17:17:10.494453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.024 [2024-05-15 17:17:10.494628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.024 [2024-05-15 17:17:10.494636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.024 [2024-05-15 17:17:10.494641] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.024 [2024-05-15 17:17:10.497421] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.024 [2024-05-15 17:17:10.506590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.024 [2024-05-15 17:17:10.507039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.024 [2024-05-15 17:17:10.507315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.024 [2024-05-15 17:17:10.507326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.024 [2024-05-15 17:17:10.507332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.024 [2024-05-15 17:17:10.507506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.024 [2024-05-15 17:17:10.507680] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.024 [2024-05-15 17:17:10.507687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.024 [2024-05-15 17:17:10.507694] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.024 [2024-05-15 17:17:10.510433] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.024 [2024-05-15 17:17:10.519626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.024 [2024-05-15 17:17:10.520041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.024 [2024-05-15 17:17:10.520374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.025 [2024-05-15 17:17:10.520406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.025 [2024-05-15 17:17:10.520434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.025 [2024-05-15 17:17:10.521020] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.025 [2024-05-15 17:17:10.521299] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.025 [2024-05-15 17:17:10.521308] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.025 [2024-05-15 17:17:10.521314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.025 [2024-05-15 17:17:10.524085] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.025 [2024-05-15 17:17:10.532600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.025 [2024-05-15 17:17:10.533037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.025 [2024-05-15 17:17:10.533337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.025 [2024-05-15 17:17:10.533369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.025 [2024-05-15 17:17:10.533390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.025 [2024-05-15 17:17:10.533975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.025 [2024-05-15 17:17:10.534172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.025 [2024-05-15 17:17:10.534180] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.025 [2024-05-15 17:17:10.534187] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.025 [2024-05-15 17:17:10.536957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.025 [2024-05-15 17:17:10.545545] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.025 [2024-05-15 17:17:10.545994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.025 [2024-05-15 17:17:10.546259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.025 [2024-05-15 17:17:10.546291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.025 [2024-05-15 17:17:10.546312] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.025 [2024-05-15 17:17:10.546899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.025 [2024-05-15 17:17:10.547409] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.025 [2024-05-15 17:17:10.547417] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.025 [2024-05-15 17:17:10.547423] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.025 [2024-05-15 17:17:10.550197] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.025 [2024-05-15 17:17:10.558548] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.025 [2024-05-15 17:17:10.558914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.025 [2024-05-15 17:17:10.559155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.025 [2024-05-15 17:17:10.559202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.025 [2024-05-15 17:17:10.559223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.025 [2024-05-15 17:17:10.559817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.025 [2024-05-15 17:17:10.560064] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.025 [2024-05-15 17:17:10.560072] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.025 [2024-05-15 17:17:10.560078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.025 [2024-05-15 17:17:10.562853] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.025 [2024-05-15 17:17:10.571586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.025 [2024-05-15 17:17:10.572026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.025 [2024-05-15 17:17:10.572192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.025 [2024-05-15 17:17:10.572203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.025 [2024-05-15 17:17:10.572210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.025 [2024-05-15 17:17:10.572385] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.025 [2024-05-15 17:17:10.572559] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.025 [2024-05-15 17:17:10.572566] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.025 [2024-05-15 17:17:10.572573] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.025 [2024-05-15 17:17:10.575347] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.025 [2024-05-15 17:17:10.584676] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.025 [2024-05-15 17:17:10.585160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.025 [2024-05-15 17:17:10.585494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.025 [2024-05-15 17:17:10.585523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.025 [2024-05-15 17:17:10.585545] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.025 [2024-05-15 17:17:10.586130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.025 [2024-05-15 17:17:10.586411] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.025 [2024-05-15 17:17:10.586419] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.025 [2024-05-15 17:17:10.586425] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.025 [2024-05-15 17:17:10.589201] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.025 [2024-05-15 17:17:10.597720] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.025 [2024-05-15 17:17:10.598162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.025 [2024-05-15 17:17:10.598478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.025 [2024-05-15 17:17:10.598508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.025 [2024-05-15 17:17:10.598529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.025 [2024-05-15 17:17:10.599114] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.025 [2024-05-15 17:17:10.599451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.025 [2024-05-15 17:17:10.599460] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.025 [2024-05-15 17:17:10.599466] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.025 [2024-05-15 17:17:10.602248] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.025 [2024-05-15 17:17:10.610767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.025 [2024-05-15 17:17:10.611232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.025 [2024-05-15 17:17:10.611444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.026 [2024-05-15 17:17:10.611453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.026 [2024-05-15 17:17:10.611460] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.026 [2024-05-15 17:17:10.611634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.026 [2024-05-15 17:17:10.611808] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.026 [2024-05-15 17:17:10.611816] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.026 [2024-05-15 17:17:10.611821] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.026 [2024-05-15 17:17:10.614541] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.026 [2024-05-15 17:17:10.623608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.026 [2024-05-15 17:17:10.624069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.026 [2024-05-15 17:17:10.624349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.026 [2024-05-15 17:17:10.624381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.026 [2024-05-15 17:17:10.624403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.026 [2024-05-15 17:17:10.624990] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.026 [2024-05-15 17:17:10.625296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.026 [2024-05-15 17:17:10.625304] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.026 [2024-05-15 17:17:10.625310] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.026 [2024-05-15 17:17:10.628086] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.026 [2024-05-15 17:17:10.636610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.026 [2024-05-15 17:17:10.637044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.026 [2024-05-15 17:17:10.637343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.026 [2024-05-15 17:17:10.637375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.026 [2024-05-15 17:17:10.637396] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.026 [2024-05-15 17:17:10.637730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.026 [2024-05-15 17:17:10.637905] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.026 [2024-05-15 17:17:10.637916] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.026 [2024-05-15 17:17:10.637922] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.026 [2024-05-15 17:17:10.640723] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.026 [2024-05-15 17:17:10.649644] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.026 [2024-05-15 17:17:10.650084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.026 [2024-05-15 17:17:10.650340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.026 [2024-05-15 17:17:10.650350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.026 [2024-05-15 17:17:10.650357] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.026 [2024-05-15 17:17:10.650530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.026 [2024-05-15 17:17:10.650704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.026 [2024-05-15 17:17:10.650712] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.026 [2024-05-15 17:17:10.650718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.026 [2024-05-15 17:17:10.653496] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.026 [2024-05-15 17:17:10.662579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.026 [2024-05-15 17:17:10.663026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.026 [2024-05-15 17:17:10.663323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.026 [2024-05-15 17:17:10.663355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.026 [2024-05-15 17:17:10.663377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.026 [2024-05-15 17:17:10.663634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.026 [2024-05-15 17:17:10.663808] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.026 [2024-05-15 17:17:10.663816] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.026 [2024-05-15 17:17:10.663822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.026 [2024-05-15 17:17:10.666626] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.026 [2024-05-15 17:17:10.675436] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.026 [2024-05-15 17:17:10.675881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.026 [2024-05-15 17:17:10.676061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.026 [2024-05-15 17:17:10.676070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.026 [2024-05-15 17:17:10.676077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.026 [2024-05-15 17:17:10.676257] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.026 [2024-05-15 17:17:10.676432] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.026 [2024-05-15 17:17:10.676439] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.026 [2024-05-15 17:17:10.676448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.026 [2024-05-15 17:17:10.679378] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.286 [2024-05-15 17:17:10.688560] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.286 [2024-05-15 17:17:10.689025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.286 [2024-05-15 17:17:10.689263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.286 [2024-05-15 17:17:10.689295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.286 [2024-05-15 17:17:10.689317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.286 [2024-05-15 17:17:10.689795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.286 [2024-05-15 17:17:10.689969] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.286 [2024-05-15 17:17:10.689977] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.286 [2024-05-15 17:17:10.689983] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.286 [2024-05-15 17:17:10.692750] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.286 [2024-05-15 17:17:10.701524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.286 [2024-05-15 17:17:10.701984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.286 [2024-05-15 17:17:10.702310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.286 [2024-05-15 17:17:10.702342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.286 [2024-05-15 17:17:10.702362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.286 [2024-05-15 17:17:10.702643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.286 [2024-05-15 17:17:10.702817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.286 [2024-05-15 17:17:10.702825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.286 [2024-05-15 17:17:10.702831] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.286 [2024-05-15 17:17:10.705536] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.286 [2024-05-15 17:17:10.714679] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.286 [2024-05-15 17:17:10.715120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.286 [2024-05-15 17:17:10.715347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.286 [2024-05-15 17:17:10.715358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.286 [2024-05-15 17:17:10.715365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.286 [2024-05-15 17:17:10.715544] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.286 [2024-05-15 17:17:10.715723] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.286 [2024-05-15 17:17:10.715731] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.286 [2024-05-15 17:17:10.715737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.286 [2024-05-15 17:17:10.718585] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.286 [2024-05-15 17:17:10.727782] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.286 [2024-05-15 17:17:10.728218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.286 [2024-05-15 17:17:10.728449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.286 [2024-05-15 17:17:10.728459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.286 [2024-05-15 17:17:10.728466] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.286 [2024-05-15 17:17:10.728640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.286 [2024-05-15 17:17:10.728814] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.286 [2024-05-15 17:17:10.728821] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.286 [2024-05-15 17:17:10.728827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.286 [2024-05-15 17:17:10.731535] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.286 [2024-05-15 17:17:10.740652] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.286 [2024-05-15 17:17:10.741073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.286 [2024-05-15 17:17:10.741307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.286 [2024-05-15 17:17:10.741340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.286 [2024-05-15 17:17:10.741361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.286 [2024-05-15 17:17:10.741859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.286 [2024-05-15 17:17:10.742023] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.286 [2024-05-15 17:17:10.742030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.286 [2024-05-15 17:17:10.742036] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.286 [2024-05-15 17:17:10.744746] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.286 [2024-05-15 17:17:10.753503] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.286 [2024-05-15 17:17:10.753943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.286 [2024-05-15 17:17:10.754197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.286 [2024-05-15 17:17:10.754229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.286 [2024-05-15 17:17:10.754250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.286 [2024-05-15 17:17:10.754531] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.286 [2024-05-15 17:17:10.754704] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.286 [2024-05-15 17:17:10.754712] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.286 [2024-05-15 17:17:10.754718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.286 [2024-05-15 17:17:10.757425] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.286 [2024-05-15 17:17:10.766390] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.286 [2024-05-15 17:17:10.766817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.286 [2024-05-15 17:17:10.767020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.286 [2024-05-15 17:17:10.767050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.286 [2024-05-15 17:17:10.767071] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.286 [2024-05-15 17:17:10.767671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.286 [2024-05-15 17:17:10.767860] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.287 [2024-05-15 17:17:10.767868] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.287 [2024-05-15 17:17:10.767874] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.287 [2024-05-15 17:17:10.770614] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.287 [2024-05-15 17:17:10.779208] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.287 [2024-05-15 17:17:10.779654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.779785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.779795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.287 [2024-05-15 17:17:10.779801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.287 [2024-05-15 17:17:10.779975] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.287 [2024-05-15 17:17:10.780148] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.287 [2024-05-15 17:17:10.780156] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.287 [2024-05-15 17:17:10.780162] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.287 [2024-05-15 17:17:10.782873] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.287 [2024-05-15 17:17:10.792035] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.287 [2024-05-15 17:17:10.792475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.792679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.792689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.287 [2024-05-15 17:17:10.792696] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.287 [2024-05-15 17:17:10.792871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.287 [2024-05-15 17:17:10.793045] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.287 [2024-05-15 17:17:10.793053] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.287 [2024-05-15 17:17:10.793060] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.287 [2024-05-15 17:17:10.795775] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.287 [2024-05-15 17:17:10.804859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.287 [2024-05-15 17:17:10.805278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.805535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.805545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.287 [2024-05-15 17:17:10.805551] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.287 [2024-05-15 17:17:10.805716] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.287 [2024-05-15 17:17:10.805880] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.287 [2024-05-15 17:17:10.805888] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.287 [2024-05-15 17:17:10.805893] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.287 [2024-05-15 17:17:10.808603] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.287 [2024-05-15 17:17:10.817716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.287 [2024-05-15 17:17:10.818085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.818337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.818369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.287 [2024-05-15 17:17:10.818391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.287 [2024-05-15 17:17:10.818880] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.287 [2024-05-15 17:17:10.819055] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.287 [2024-05-15 17:17:10.819063] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.287 [2024-05-15 17:17:10.819068] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.287 [2024-05-15 17:17:10.821775] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.287 [2024-05-15 17:17:10.830536] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.287 [2024-05-15 17:17:10.830988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.831255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.831287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.287 [2024-05-15 17:17:10.831309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.287 [2024-05-15 17:17:10.831731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.287 [2024-05-15 17:17:10.831905] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.287 [2024-05-15 17:17:10.831913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.287 [2024-05-15 17:17:10.831919] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.287 [2024-05-15 17:17:10.834647] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.287 [2024-05-15 17:17:10.843395] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.287 [2024-05-15 17:17:10.843815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.843989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.844002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.287 [2024-05-15 17:17:10.844008] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.287 [2024-05-15 17:17:10.844178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.287 [2024-05-15 17:17:10.844368] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.287 [2024-05-15 17:17:10.844376] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.287 [2024-05-15 17:17:10.844383] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.287 [2024-05-15 17:17:10.847087] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.287 [2024-05-15 17:17:10.856367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.287 [2024-05-15 17:17:10.856832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.857127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.857158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.287 [2024-05-15 17:17:10.857192] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.287 [2024-05-15 17:17:10.857777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.287 [2024-05-15 17:17:10.858056] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.287 [2024-05-15 17:17:10.858064] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.287 [2024-05-15 17:17:10.858070] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.287 [2024-05-15 17:17:10.860842] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.287 [2024-05-15 17:17:10.869290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.287 [2024-05-15 17:17:10.869748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.870070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.870100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.287 [2024-05-15 17:17:10.870121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.287 [2024-05-15 17:17:10.870322] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.287 [2024-05-15 17:17:10.870496] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.287 [2024-05-15 17:17:10.870504] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.287 [2024-05-15 17:17:10.870510] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.287 [2024-05-15 17:17:10.873215] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.287 [2024-05-15 17:17:10.882156] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.287 [2024-05-15 17:17:10.882595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.882865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.287 [2024-05-15 17:17:10.882895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.287 [2024-05-15 17:17:10.882922] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.287 [2024-05-15 17:17:10.883175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.287 [2024-05-15 17:17:10.883366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.287 [2024-05-15 17:17:10.883374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.287 [2024-05-15 17:17:10.883379] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.288 [2024-05-15 17:17:10.886148] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.288 [2024-05-15 17:17:10.895051] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.288 [2024-05-15 17:17:10.895499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.288 [2024-05-15 17:17:10.895694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.288 [2024-05-15 17:17:10.895704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.288 [2024-05-15 17:17:10.895711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.288 [2024-05-15 17:17:10.895884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.288 [2024-05-15 17:17:10.896058] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.288 [2024-05-15 17:17:10.896066] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.288 [2024-05-15 17:17:10.896072] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.288 [2024-05-15 17:17:10.898781] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.288 [2024-05-15 17:17:10.908009] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.288 [2024-05-15 17:17:10.908458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.288 [2024-05-15 17:17:10.908711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.288 [2024-05-15 17:17:10.908722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.288 [2024-05-15 17:17:10.908728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.288 [2024-05-15 17:17:10.908902] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.288 [2024-05-15 17:17:10.909075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.288 [2024-05-15 17:17:10.909083] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.288 [2024-05-15 17:17:10.909089] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.288 [2024-05-15 17:17:10.911797] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.288 [2024-05-15 17:17:10.920865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.288 [2024-05-15 17:17:10.921312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.288 [2024-05-15 17:17:10.921645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.288 [2024-05-15 17:17:10.921675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.288 [2024-05-15 17:17:10.921697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.288 [2024-05-15 17:17:10.922035] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.288 [2024-05-15 17:17:10.922215] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.288 [2024-05-15 17:17:10.922224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.288 [2024-05-15 17:17:10.922230] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.288 [2024-05-15 17:17:10.924927] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.288 [2024-05-15 17:17:10.933734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.288 [2024-05-15 17:17:10.934153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.288 [2024-05-15 17:17:10.934382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.288 [2024-05-15 17:17:10.934393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.288 [2024-05-15 17:17:10.934399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.288 [2024-05-15 17:17:10.934573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.288 [2024-05-15 17:17:10.934747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.288 [2024-05-15 17:17:10.934754] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.288 [2024-05-15 17:17:10.934760] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.288 [2024-05-15 17:17:10.937473] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.548 [2024-05-15 17:17:10.946811] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.548 [2024-05-15 17:17:10.947278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:10.947501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:10.947531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.548 [2024-05-15 17:17:10.947553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.548 [2024-05-15 17:17:10.947944] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.548 [2024-05-15 17:17:10.948119] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.548 [2024-05-15 17:17:10.948127] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.548 [2024-05-15 17:17:10.948133] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.548 [2024-05-15 17:17:10.951032] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.548 [2024-05-15 17:17:10.959638] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.548 [2024-05-15 17:17:10.960104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:10.960365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:10.960376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.548 [2024-05-15 17:17:10.960383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.548 [2024-05-15 17:17:10.960562] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.548 [2024-05-15 17:17:10.960745] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.548 [2024-05-15 17:17:10.960753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.548 [2024-05-15 17:17:10.960759] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.548 [2024-05-15 17:17:10.963626] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.548 [2024-05-15 17:17:10.972694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.548 [2024-05-15 17:17:10.973187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:10.973490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:10.973521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.548 [2024-05-15 17:17:10.973543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.548 [2024-05-15 17:17:10.974128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.548 [2024-05-15 17:17:10.974458] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.548 [2024-05-15 17:17:10.974467] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.548 [2024-05-15 17:17:10.974472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.548 [2024-05-15 17:17:10.977244] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.548 [2024-05-15 17:17:10.985610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.548 [2024-05-15 17:17:10.986052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:10.986324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:10.986356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.548 [2024-05-15 17:17:10.986377] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.548 [2024-05-15 17:17:10.986963] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.548 [2024-05-15 17:17:10.987194] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.548 [2024-05-15 17:17:10.987202] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.548 [2024-05-15 17:17:10.987208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.548 [2024-05-15 17:17:10.989915] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.548 [2024-05-15 17:17:10.998546] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.548 [2024-05-15 17:17:10.999028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:10.999297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:10.999329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.548 [2024-05-15 17:17:10.999351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.548 [2024-05-15 17:17:10.999937] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.548 [2024-05-15 17:17:11.000146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.548 [2024-05-15 17:17:11.000160] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.548 [2024-05-15 17:17:11.000171] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.548 [2024-05-15 17:17:11.002883] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.548 [2024-05-15 17:17:11.011505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.548 [2024-05-15 17:17:11.011950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:11.012134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:11.012144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.548 [2024-05-15 17:17:11.012151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.548 [2024-05-15 17:17:11.012331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.548 [2024-05-15 17:17:11.012506] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.548 [2024-05-15 17:17:11.012514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.548 [2024-05-15 17:17:11.012520] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.548 [2024-05-15 17:17:11.015236] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.548 [2024-05-15 17:17:11.024474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.548 [2024-05-15 17:17:11.024950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:11.025130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:11.025161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.548 [2024-05-15 17:17:11.025200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.548 [2024-05-15 17:17:11.025784] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.548 [2024-05-15 17:17:11.026382] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.548 [2024-05-15 17:17:11.026408] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.548 [2024-05-15 17:17:11.026413] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.548 [2024-05-15 17:17:11.029119] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.548 [2024-05-15 17:17:11.037523] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.548 [2024-05-15 17:17:11.037992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:11.038250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:11.038281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.548 [2024-05-15 17:17:11.038302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.548 [2024-05-15 17:17:11.038512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.548 [2024-05-15 17:17:11.038687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.548 [2024-05-15 17:17:11.038695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.548 [2024-05-15 17:17:11.038705] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.548 [2024-05-15 17:17:11.041420] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.548 [2024-05-15 17:17:11.050580] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.548 [2024-05-15 17:17:11.050968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:11.051242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.548 [2024-05-15 17:17:11.051274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.548 [2024-05-15 17:17:11.051295] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.548 [2024-05-15 17:17:11.051673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.548 [2024-05-15 17:17:11.051853] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.548 [2024-05-15 17:17:11.051861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.548 [2024-05-15 17:17:11.051867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.549 [2024-05-15 17:17:11.054802] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.549 [2024-05-15 17:17:11.063515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.549 [2024-05-15 17:17:11.063993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.064224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.064256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.549 [2024-05-15 17:17:11.064277] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.549 [2024-05-15 17:17:11.064643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.549 [2024-05-15 17:17:11.064816] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.549 [2024-05-15 17:17:11.064824] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.549 [2024-05-15 17:17:11.064830] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.549 [2024-05-15 17:17:11.067640] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.549 [2024-05-15 17:17:11.076524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.549 [2024-05-15 17:17:11.076996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.077284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.077315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.549 [2024-05-15 17:17:11.077337] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.549 [2024-05-15 17:17:11.077662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.549 [2024-05-15 17:17:11.077837] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.549 [2024-05-15 17:17:11.077845] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.549 [2024-05-15 17:17:11.077851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.549 [2024-05-15 17:17:11.080639] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.549 [2024-05-15 17:17:11.089599] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.549 [2024-05-15 17:17:11.090074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.090372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.090404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.549 [2024-05-15 17:17:11.090424] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.549 [2024-05-15 17:17:11.090676] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.549 [2024-05-15 17:17:11.090850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.549 [2024-05-15 17:17:11.090857] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.549 [2024-05-15 17:17:11.090864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.549 [2024-05-15 17:17:11.093591] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.549 [2024-05-15 17:17:11.102540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.549 [2024-05-15 17:17:11.102940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.103099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.103129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.549 [2024-05-15 17:17:11.103150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.549 [2024-05-15 17:17:11.103536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.549 [2024-05-15 17:17:11.103711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.549 [2024-05-15 17:17:11.103720] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.549 [2024-05-15 17:17:11.103725] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.549 [2024-05-15 17:17:11.106440] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.549 [2024-05-15 17:17:11.115455] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.549 [2024-05-15 17:17:11.115958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.116246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.116280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.549 [2024-05-15 17:17:11.116301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.549 [2024-05-15 17:17:11.116636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.549 [2024-05-15 17:17:11.116800] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.549 [2024-05-15 17:17:11.116808] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.549 [2024-05-15 17:17:11.116813] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.549 [2024-05-15 17:17:11.119517] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.549 [2024-05-15 17:17:11.128304] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.549 [2024-05-15 17:17:11.128740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.128926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.128936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.549 [2024-05-15 17:17:11.128943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.549 [2024-05-15 17:17:11.129117] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.549 [2024-05-15 17:17:11.129296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.549 [2024-05-15 17:17:11.129305] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.549 [2024-05-15 17:17:11.129310] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.549 [2024-05-15 17:17:11.132084] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.549 [2024-05-15 17:17:11.141244] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.549 [2024-05-15 17:17:11.141700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.141928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.141957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.549 [2024-05-15 17:17:11.141979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.549 [2024-05-15 17:17:11.142578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.549 [2024-05-15 17:17:11.142831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.549 [2024-05-15 17:17:11.142840] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.549 [2024-05-15 17:17:11.142846] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.549 [2024-05-15 17:17:11.145563] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.549 [2024-05-15 17:17:11.154076] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.549 [2024-05-15 17:17:11.154409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.154547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.549 [2024-05-15 17:17:11.154557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.549 [2024-05-15 17:17:11.154563] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.549 [2024-05-15 17:17:11.154738] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.549 [2024-05-15 17:17:11.154912] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.549 [2024-05-15 17:17:11.154920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.549 [2024-05-15 17:17:11.154926] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3215511 Killed "${NVMF_APP[@]}" "$@" 00:26:23.549 [2024-05-15 17:17:11.157710] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.549 17:17:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:23.549 17:17:11 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:23.549 17:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:23.549 17:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:23.549 17:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.549 17:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3216917 00:26:23.549 17:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3216917 00:26:23.549 17:17:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:23.549 17:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3216917 ']' 00:26:23.549 17:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.549 17:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:23.549 17:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:23.550 [2024-05-15 17:17:11.167301] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.550 17:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:23.550 [2024-05-15 17:17:11.167692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.550 17:17:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:23.550 [2024-05-15 17:17:11.167900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.550 [2024-05-15 17:17:11.167911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.550 [2024-05-15 17:17:11.167918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.550 [2024-05-15 17:17:11.168098] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.550 [2024-05-15 17:17:11.168285] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.550 [2024-05-15 17:17:11.168295] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.550 [2024-05-15 17:17:11.168300] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.550 [2024-05-15 17:17:11.171161] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.550 [2024-05-15 17:17:11.180460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.550 [2024-05-15 17:17:11.180908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.550 [2024-05-15 17:17:11.181174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.550 [2024-05-15 17:17:11.181186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.550 [2024-05-15 17:17:11.181193] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.550 [2024-05-15 17:17:11.181372] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.550 [2024-05-15 17:17:11.181551] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.550 [2024-05-15 17:17:11.181559] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.550 [2024-05-15 17:17:11.181566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.550 [2024-05-15 17:17:11.184430] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.550 [2024-05-15 17:17:11.193565] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.550 [2024-05-15 17:17:11.194044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.550 [2024-05-15 17:17:11.194258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.550 [2024-05-15 17:17:11.194269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.550 [2024-05-15 17:17:11.194276] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.550 [2024-05-15 17:17:11.194455] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.550 [2024-05-15 17:17:11.194635] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.550 [2024-05-15 17:17:11.194643] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.550 [2024-05-15 17:17:11.194649] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.550 [2024-05-15 17:17:11.197529] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.809 [2024-05-15 17:17:11.206780] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.809 [2024-05-15 17:17:11.207224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.809 [2024-05-15 17:17:11.207491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.809 [2024-05-15 17:17:11.207501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.809 [2024-05-15 17:17:11.207509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.809 [2024-05-15 17:17:11.207688] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.809 [2024-05-15 17:17:11.207869] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.809 [2024-05-15 17:17:11.207877] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.809 [2024-05-15 17:17:11.207883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.809 [2024-05-15 17:17:11.209587] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:26:23.809 [2024-05-15 17:17:11.209625] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.809 [2024-05-15 17:17:11.210784] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.809 [2024-05-15 17:17:11.219946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.809 [2024-05-15 17:17:11.220322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.809 [2024-05-15 17:17:11.220497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.809 [2024-05-15 17:17:11.220508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.809 [2024-05-15 17:17:11.220515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.809 [2024-05-15 17:17:11.220695] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.809 [2024-05-15 17:17:11.220877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.809 [2024-05-15 17:17:11.220888] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.809 [2024-05-15 17:17:11.220895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.809 [2024-05-15 17:17:11.223767] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.809 [2024-05-15 17:17:11.233066] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.809 [2024-05-15 17:17:11.233398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.809 [2024-05-15 17:17:11.233619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.809 [2024-05-15 17:17:11.233630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.809 [2024-05-15 17:17:11.233637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.809 [2024-05-15 17:17:11.233816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.809 [2024-05-15 17:17:11.233997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.809 [2024-05-15 17:17:11.234005] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.809 [2024-05-15 17:17:11.234012] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.809 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.809 [2024-05-15 17:17:11.236880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.809 [2024-05-15 17:17:11.246187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.809 [2024-05-15 17:17:11.246581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.809 [2024-05-15 17:17:11.246716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.809 [2024-05-15 17:17:11.246728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.809 [2024-05-15 17:17:11.246735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.809 [2024-05-15 17:17:11.246914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.809 [2024-05-15 17:17:11.247094] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.809 [2024-05-15 17:17:11.247102] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.809 [2024-05-15 17:17:11.247109] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.809 [2024-05-15 17:17:11.249982] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.809 [2024-05-15 17:17:11.259474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.810 [2024-05-15 17:17:11.259899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.260088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.260099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.810 [2024-05-15 17:17:11.260107] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.810 [2024-05-15 17:17:11.260294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.810 [2024-05-15 17:17:11.260482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.810 [2024-05-15 17:17:11.260489] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.810 [2024-05-15 17:17:11.260496] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.810 [2024-05-15 17:17:11.263368] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.810 [2024-05-15 17:17:11.269331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:23.810 [2024-05-15 17:17:11.272541] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.810 [2024-05-15 17:17:11.272934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.273102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.273113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.810 [2024-05-15 17:17:11.273120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.810 [2024-05-15 17:17:11.273308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.810 [2024-05-15 17:17:11.273495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.810 [2024-05-15 17:17:11.273503] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.810 [2024-05-15 17:17:11.273510] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.810 [2024-05-15 17:17:11.276347] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.810 [2024-05-15 17:17:11.285594] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.810 [2024-05-15 17:17:11.285993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.286178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.286190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.810 [2024-05-15 17:17:11.286197] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.810 [2024-05-15 17:17:11.286377] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.810 [2024-05-15 17:17:11.286556] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.810 [2024-05-15 17:17:11.286564] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.810 [2024-05-15 17:17:11.286571] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.810 [2024-05-15 17:17:11.289423] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.810 [2024-05-15 17:17:11.298712] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.810 [2024-05-15 17:17:11.299125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.299262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.299272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.810 [2024-05-15 17:17:11.299279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.810 [2024-05-15 17:17:11.299460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.810 [2024-05-15 17:17:11.299639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.810 [2024-05-15 17:17:11.299647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.810 [2024-05-15 17:17:11.299653] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.810 [2024-05-15 17:17:11.302555] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.810 [2024-05-15 17:17:11.311818] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.810 [2024-05-15 17:17:11.312238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.312370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.312381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.810 [2024-05-15 17:17:11.312389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.810 [2024-05-15 17:17:11.312565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.810 [2024-05-15 17:17:11.312740] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.810 [2024-05-15 17:17:11.312748] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.810 [2024-05-15 17:17:11.312755] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.810 [2024-05-15 17:17:11.315590] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.810 [2024-05-15 17:17:11.324842] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.810 [2024-05-15 17:17:11.325141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.325384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.325394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.810 [2024-05-15 17:17:11.325402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.810 [2024-05-15 17:17:11.325583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.810 [2024-05-15 17:17:11.325763] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.810 [2024-05-15 17:17:11.325771] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.810 [2024-05-15 17:17:11.325778] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.810 [2024-05-15 17:17:11.328608] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.810 [2024-05-15 17:17:11.337913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.810 [2024-05-15 17:17:11.338299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.338539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.338549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.810 [2024-05-15 17:17:11.338555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.810 [2024-05-15 17:17:11.338735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.810 [2024-05-15 17:17:11.338913] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.810 [2024-05-15 17:17:11.338921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.810 [2024-05-15 17:17:11.338928] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.810 [2024-05-15 17:17:11.341794] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.810 [2024-05-15 17:17:11.350539] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:23.810 [2024-05-15 17:17:11.350569] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:23.810 [2024-05-15 17:17:11.350576] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:23.810 [2024-05-15 17:17:11.350582] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:26:23.810 [2024-05-15 17:17:11.350587] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:23.810 [2024-05-15 17:17:11.350624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.810 [2024-05-15 17:17:11.350729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:23.810 [2024-05-15 17:17:11.350730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.810 [2024-05-15 17:17:11.351110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.810 [2024-05-15 17:17:11.351533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.351663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.351673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.810 [2024-05-15 17:17:11.351680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.810 [2024-05-15 17:17:11.351860] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.810 [2024-05-15 17:17:11.352039] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.810 [2024-05-15 17:17:11.352047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.810 [2024-05-15 17:17:11.352053] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.810 [2024-05-15 17:17:11.354925] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.810 [2024-05-15 17:17:11.364224] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.810 [2024-05-15 17:17:11.364618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.810 [2024-05-15 17:17:11.364806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.364816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.811 [2024-05-15 17:17:11.364824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.811 [2024-05-15 17:17:11.365005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.811 [2024-05-15 17:17:11.365191] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.811 [2024-05-15 17:17:11.365199] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.811 [2024-05-15 17:17:11.365206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.811 [2024-05-15 17:17:11.368072] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.811 [2024-05-15 17:17:11.377374] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.811 [2024-05-15 17:17:11.377812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.378047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.378057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.811 [2024-05-15 17:17:11.378065] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.811 [2024-05-15 17:17:11.378250] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.811 [2024-05-15 17:17:11.378438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.811 [2024-05-15 17:17:11.378446] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.811 [2024-05-15 17:17:11.378453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.811 [2024-05-15 17:17:11.381353] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.811 [2024-05-15 17:17:11.390574] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.811 [2024-05-15 17:17:11.390914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.391127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.391137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.811 [2024-05-15 17:17:11.391145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.811 [2024-05-15 17:17:11.391331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.811 [2024-05-15 17:17:11.391512] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.811 [2024-05-15 17:17:11.391520] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.811 [2024-05-15 17:17:11.391527] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.811 [2024-05-15 17:17:11.394393] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.811 [2024-05-15 17:17:11.403718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.811 [2024-05-15 17:17:11.404103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.404328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.404341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.811 [2024-05-15 17:17:11.404349] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.811 [2024-05-15 17:17:11.404530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.811 [2024-05-15 17:17:11.404710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.811 [2024-05-15 17:17:11.404718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.811 [2024-05-15 17:17:11.404725] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.811 [2024-05-15 17:17:11.407591] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.811 [2024-05-15 17:17:11.417030] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.811 [2024-05-15 17:17:11.417438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.417651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.417661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.811 [2024-05-15 17:17:11.417669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.811 [2024-05-15 17:17:11.417850] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.811 [2024-05-15 17:17:11.418029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.811 [2024-05-15 17:17:11.418042] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.811 [2024-05-15 17:17:11.418049] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.811 [2024-05-15 17:17:11.420913] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.811 [2024-05-15 17:17:11.430205] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.811 [2024-05-15 17:17:11.430563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.430798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.430809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.811 [2024-05-15 17:17:11.430816] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.811 [2024-05-15 17:17:11.430995] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.811 [2024-05-15 17:17:11.431179] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.811 [2024-05-15 17:17:11.431188] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.811 [2024-05-15 17:17:11.431194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.811 [2024-05-15 17:17:11.434052] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:23.811 [2024-05-15 17:17:11.443334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.811 [2024-05-15 17:17:11.443773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.443957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.443967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.811 [2024-05-15 17:17:11.443974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.811 [2024-05-15 17:17:11.444154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.811 [2024-05-15 17:17:11.444338] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.811 [2024-05-15 17:17:11.444347] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.811 [2024-05-15 17:17:11.444353] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.811 [2024-05-15 17:17:11.447213] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.811 [2024-05-15 17:17:11.456488] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.811 [2024-05-15 17:17:11.456952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.457139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.811 [2024-05-15 17:17:11.457149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:23.811 [2024-05-15 17:17:11.457156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:23.811 [2024-05-15 17:17:11.457340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:23.811 [2024-05-15 17:17:11.457520] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.811 [2024-05-15 17:17:11.457528] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.811 [2024-05-15 17:17:11.457538] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.811 [2024-05-15 17:17:11.460401] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.070 [2024-05-15 17:17:11.469637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.070 [2024-05-15 17:17:11.470116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.070 [2024-05-15 17:17:11.470280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.070 [2024-05-15 17:17:11.470292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.070 [2024-05-15 17:17:11.470299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.070 [2024-05-15 17:17:11.470478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.070 [2024-05-15 17:17:11.470658] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.070 [2024-05-15 17:17:11.470666] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.070 [2024-05-15 17:17:11.470673] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.070 [2024-05-15 17:17:11.473568] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.070 [2024-05-15 17:17:11.482858] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.070 [2024-05-15 17:17:11.483311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.070 [2024-05-15 17:17:11.483562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.070 [2024-05-15 17:17:11.483572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.070 [2024-05-15 17:17:11.483579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.070 [2024-05-15 17:17:11.483759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.070 [2024-05-15 17:17:11.483937] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.070 [2024-05-15 17:17:11.483945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.070 [2024-05-15 17:17:11.483952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.070 [2024-05-15 17:17:11.486812] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.070 [2024-05-15 17:17:11.495921] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.070 [2024-05-15 17:17:11.496310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.070 [2024-05-15 17:17:11.496565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.070 [2024-05-15 17:17:11.496576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.070 [2024-05-15 17:17:11.496583] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.070 [2024-05-15 17:17:11.496762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.070 [2024-05-15 17:17:11.496943] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.070 [2024-05-15 17:17:11.496951] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.070 [2024-05-15 17:17:11.496957] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.070 [2024-05-15 17:17:11.499823] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.070 [2024-05-15 17:17:11.509129] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.070 [2024-05-15 17:17:11.509498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.070 [2024-05-15 17:17:11.509737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.070 [2024-05-15 17:17:11.509747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.070 [2024-05-15 17:17:11.509754] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.070 [2024-05-15 17:17:11.509933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.070 [2024-05-15 17:17:11.510112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.070 [2024-05-15 17:17:11.510121] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.070 [2024-05-15 17:17:11.510127] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.070 [2024-05-15 17:17:11.512989] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.070 [2024-05-15 17:17:11.522264] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.070 [2024-05-15 17:17:11.522728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.522901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.522911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.071 [2024-05-15 17:17:11.522918] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.071 [2024-05-15 17:17:11.523096] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.071 [2024-05-15 17:17:11.523279] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.071 [2024-05-15 17:17:11.523288] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.071 [2024-05-15 17:17:11.523294] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.071 [2024-05-15 17:17:11.526152] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.071 [2024-05-15 17:17:11.535432] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.071 [2024-05-15 17:17:11.535894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.536142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.536152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.071 [2024-05-15 17:17:11.536159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.071 [2024-05-15 17:17:11.536343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.071 [2024-05-15 17:17:11.536523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.071 [2024-05-15 17:17:11.536531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.071 [2024-05-15 17:17:11.536536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.071 [2024-05-15 17:17:11.539398] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.071 [2024-05-15 17:17:11.548511] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.071 [2024-05-15 17:17:11.548950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.549208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.549219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.071 [2024-05-15 17:17:11.549226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.071 [2024-05-15 17:17:11.549405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.071 [2024-05-15 17:17:11.549584] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.071 [2024-05-15 17:17:11.549591] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.071 [2024-05-15 17:17:11.549597] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.071 [2024-05-15 17:17:11.552460] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.071 [2024-05-15 17:17:11.561737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.071 [2024-05-15 17:17:11.562099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.562356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.562366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.071 [2024-05-15 17:17:11.562373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.071 [2024-05-15 17:17:11.562553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.071 [2024-05-15 17:17:11.562732] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.071 [2024-05-15 17:17:11.562740] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.071 [2024-05-15 17:17:11.562746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.071 [2024-05-15 17:17:11.565605] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.071 [2024-05-15 17:17:11.574881] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.071 [2024-05-15 17:17:11.575275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.575452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.575463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.071 [2024-05-15 17:17:11.575469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.071 [2024-05-15 17:17:11.575649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.071 [2024-05-15 17:17:11.575828] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.071 [2024-05-15 17:17:11.575836] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.071 [2024-05-15 17:17:11.575842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.071 [2024-05-15 17:17:11.578703] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.071 [2024-05-15 17:17:11.587980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.071 [2024-05-15 17:17:11.588325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.588587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.588597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.071 [2024-05-15 17:17:11.588604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.071 [2024-05-15 17:17:11.588783] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.071 [2024-05-15 17:17:11.588963] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.071 [2024-05-15 17:17:11.588971] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.071 [2024-05-15 17:17:11.588977] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.071 [2024-05-15 17:17:11.591837] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.071 [2024-05-15 17:17:11.601113] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.071 [2024-05-15 17:17:11.601569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.601821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.601832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.071 [2024-05-15 17:17:11.601838] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.071 [2024-05-15 17:17:11.602017] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.071 [2024-05-15 17:17:11.602200] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.071 [2024-05-15 17:17:11.602218] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.071 [2024-05-15 17:17:11.602225] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.071 [2024-05-15 17:17:11.605092] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.071 [2024-05-15 17:17:11.614217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.071 [2024-05-15 17:17:11.614691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.614948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.071 [2024-05-15 17:17:11.614958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.071 [2024-05-15 17:17:11.614965] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.071 [2024-05-15 17:17:11.615144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.071 [2024-05-15 17:17:11.615328] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.072 [2024-05-15 17:17:11.615337] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.072 [2024-05-15 17:17:11.615344] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.072 [2024-05-15 17:17:11.618201] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.072 [2024-05-15 17:17:11.627304] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.072 [2024-05-15 17:17:11.627769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.628042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.628059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.072 [2024-05-15 17:17:11.628066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.072 [2024-05-15 17:17:11.628249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.072 [2024-05-15 17:17:11.628429] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.072 [2024-05-15 17:17:11.628437] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.072 [2024-05-15 17:17:11.628443] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.072 [2024-05-15 17:17:11.631302] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.072 [2024-05-15 17:17:11.640413] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.072 [2024-05-15 17:17:11.640797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.641028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.641038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.072 [2024-05-15 17:17:11.641045] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.072 [2024-05-15 17:17:11.641227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.072 [2024-05-15 17:17:11.641406] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.072 [2024-05-15 17:17:11.641414] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.072 [2024-05-15 17:17:11.641420] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.072 [2024-05-15 17:17:11.644281] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.072 [2024-05-15 17:17:11.653553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.072 [2024-05-15 17:17:11.654013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.654263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.654274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.072 [2024-05-15 17:17:11.654280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.072 [2024-05-15 17:17:11.654460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.072 [2024-05-15 17:17:11.654639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.072 [2024-05-15 17:17:11.654647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.072 [2024-05-15 17:17:11.654653] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.072 [2024-05-15 17:17:11.657515] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.072 [2024-05-15 17:17:11.666622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.072 [2024-05-15 17:17:11.666980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.667250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.667261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.072 [2024-05-15 17:17:11.667271] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.072 [2024-05-15 17:17:11.667451] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.072 [2024-05-15 17:17:11.667630] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.072 [2024-05-15 17:17:11.667638] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.072 [2024-05-15 17:17:11.667644] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.072 [2024-05-15 17:17:11.670508] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.072 [2024-05-15 17:17:11.679796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.072 [2024-05-15 17:17:11.680258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.680509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.680519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.072 [2024-05-15 17:17:11.680525] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.072 [2024-05-15 17:17:11.680706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.072 [2024-05-15 17:17:11.680885] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.072 [2024-05-15 17:17:11.680893] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.072 [2024-05-15 17:17:11.680899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.072 [2024-05-15 17:17:11.683761] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.072 [2024-05-15 17:17:11.692882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.072 [2024-05-15 17:17:11.693343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.693474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.693484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.072 [2024-05-15 17:17:11.693491] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.072 [2024-05-15 17:17:11.693670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.072 [2024-05-15 17:17:11.693850] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.072 [2024-05-15 17:17:11.693858] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.072 [2024-05-15 17:17:11.693864] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.072 [2024-05-15 17:17:11.696728] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.072 [2024-05-15 17:17:11.706017] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.072 [2024-05-15 17:17:11.706467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.706747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.706757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.072 [2024-05-15 17:17:11.706764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.072 [2024-05-15 17:17:11.706947] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.072 [2024-05-15 17:17:11.707126] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.072 [2024-05-15 17:17:11.707134] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.072 [2024-05-15 17:17:11.707140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.072 [2024-05-15 17:17:11.710000] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.072 [2024-05-15 17:17:11.719116] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.072 [2024-05-15 17:17:11.719586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.719763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.072 [2024-05-15 17:17:11.719773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.073 [2024-05-15 17:17:11.719779] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.073 [2024-05-15 17:17:11.719958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.073 [2024-05-15 17:17:11.720137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.073 [2024-05-15 17:17:11.720146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.073 [2024-05-15 17:17:11.720152] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.073 [2024-05-15 17:17:11.723014] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.331 [2024-05-15 17:17:11.732217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.331 [2024-05-15 17:17:11.732691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-05-15 17:17:11.732946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-05-15 17:17:11.732959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.331 [2024-05-15 17:17:11.732966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.331 [2024-05-15 17:17:11.733148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.331 [2024-05-15 17:17:11.733334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.331 [2024-05-15 17:17:11.733343] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.331 [2024-05-15 17:17:11.733349] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.331 [2024-05-15 17:17:11.736232] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.331 [2024-05-15 17:17:11.745355] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.331 [2024-05-15 17:17:11.745821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-05-15 17:17:11.746006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-05-15 17:17:11.746016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.331 [2024-05-15 17:17:11.746023] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.331 [2024-05-15 17:17:11.746206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.331 [2024-05-15 17:17:11.746389] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.331 [2024-05-15 17:17:11.746398] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.331 [2024-05-15 17:17:11.746404] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.331 [2024-05-15 17:17:11.749271] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.331 [2024-05-15 17:17:11.758553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.331 [2024-05-15 17:17:11.759031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-05-15 17:17:11.759306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-05-15 17:17:11.759317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.331 [2024-05-15 17:17:11.759324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.331 [2024-05-15 17:17:11.759503] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.331 [2024-05-15 17:17:11.759683] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.331 [2024-05-15 17:17:11.759691] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.331 [2024-05-15 17:17:11.759697] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.331 [2024-05-15 17:17:11.762556] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.331 [2024-05-15 17:17:11.771665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.331 [2024-05-15 17:17:11.772125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-05-15 17:17:11.772356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-05-15 17:17:11.772366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.331 [2024-05-15 17:17:11.772373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.331 [2024-05-15 17:17:11.772553] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.331 [2024-05-15 17:17:11.772732] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.331 [2024-05-15 17:17:11.772740] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.331 [2024-05-15 17:17:11.772746] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.331 [2024-05-15 17:17:11.775606] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.331 [2024-05-15 17:17:11.784882] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.331 [2024-05-15 17:17:11.785347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-05-15 17:17:11.785576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.331 [2024-05-15 17:17:11.785587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.332 [2024-05-15 17:17:11.785594] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.332 [2024-05-15 17:17:11.785773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.332 [2024-05-15 17:17:11.785952] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.332 [2024-05-15 17:17:11.785963] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.332 [2024-05-15 17:17:11.785970] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.332 [2024-05-15 17:17:11.788832] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.332 [2024-05-15 17:17:11.798108] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.332 [2024-05-15 17:17:11.798574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.798766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.798777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.332 [2024-05-15 17:17:11.798784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.332 [2024-05-15 17:17:11.798964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.332 [2024-05-15 17:17:11.799143] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.332 [2024-05-15 17:17:11.799151] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.332 [2024-05-15 17:17:11.799160] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.332 [2024-05-15 17:17:11.802030] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.332 [2024-05-15 17:17:11.811321] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.332 [2024-05-15 17:17:11.811788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.811967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.811977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.332 [2024-05-15 17:17:11.811984] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.332 [2024-05-15 17:17:11.812162] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.332 [2024-05-15 17:17:11.812347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.332 [2024-05-15 17:17:11.812355] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.332 [2024-05-15 17:17:11.812361] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.332 [2024-05-15 17:17:11.815223] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.332 [2024-05-15 17:17:11.824505] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.332 [2024-05-15 17:17:11.824970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.825177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.825188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.332 [2024-05-15 17:17:11.825195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.332 [2024-05-15 17:17:11.825373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.332 [2024-05-15 17:17:11.825552] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.332 [2024-05-15 17:17:11.825560] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.332 [2024-05-15 17:17:11.825569] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.332 [2024-05-15 17:17:11.828433] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.332 [2024-05-15 17:17:11.837707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.332 [2024-05-15 17:17:11.838105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.838283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.838294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.332 [2024-05-15 17:17:11.838301] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.332 [2024-05-15 17:17:11.838481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.332 [2024-05-15 17:17:11.838660] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.332 [2024-05-15 17:17:11.838668] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.332 [2024-05-15 17:17:11.838674] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.332 [2024-05-15 17:17:11.841534] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.332 [2024-05-15 17:17:11.850813] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.332 [2024-05-15 17:17:11.851177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.851405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.851415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.332 [2024-05-15 17:17:11.851421] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.332 [2024-05-15 17:17:11.851601] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.332 [2024-05-15 17:17:11.851780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.332 [2024-05-15 17:17:11.851788] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.332 [2024-05-15 17:17:11.851794] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.332 [2024-05-15 17:17:11.854656] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.332 [2024-05-15 17:17:11.863937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.332 [2024-05-15 17:17:11.864402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.864657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.864667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.332 [2024-05-15 17:17:11.864674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.332 [2024-05-15 17:17:11.864854] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.332 [2024-05-15 17:17:11.865033] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.332 [2024-05-15 17:17:11.865041] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.332 [2024-05-15 17:17:11.865047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.332 [2024-05-15 17:17:11.867915] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.332 [2024-05-15 17:17:11.877026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.332 [2024-05-15 17:17:11.877388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.877642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.877652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.332 [2024-05-15 17:17:11.877659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.332 [2024-05-15 17:17:11.877838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.332 [2024-05-15 17:17:11.878017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.332 [2024-05-15 17:17:11.878025] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.332 [2024-05-15 17:17:11.878031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.332 [2024-05-15 17:17:11.880894] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.332 [2024-05-15 17:17:11.890170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.332 [2024-05-15 17:17:11.890539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.890713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.332 [2024-05-15 17:17:11.890723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.332 [2024-05-15 17:17:11.890730] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.332 [2024-05-15 17:17:11.890909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.332 [2024-05-15 17:17:11.891088] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.333 [2024-05-15 17:17:11.891096] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.333 [2024-05-15 17:17:11.891102] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.333 [2024-05-15 17:17:11.893963] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.333 [2024-05-15 17:17:11.903251] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.333 [2024-05-15 17:17:11.903716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-05-15 17:17:11.903857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-05-15 17:17:11.903868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.333 [2024-05-15 17:17:11.903874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.333 [2024-05-15 17:17:11.904054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.333 [2024-05-15 17:17:11.904238] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.333 [2024-05-15 17:17:11.904247] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.333 [2024-05-15 17:17:11.904253] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.333 [2024-05-15 17:17:11.907113] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.333 [2024-05-15 17:17:11.916396] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.333 [2024-05-15 17:17:11.916857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-05-15 17:17:11.916986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-05-15 17:17:11.916997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.333 [2024-05-15 17:17:11.917003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.333 [2024-05-15 17:17:11.917186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.333 [2024-05-15 17:17:11.917366] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.333 [2024-05-15 17:17:11.917374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.333 [2024-05-15 17:17:11.917380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.333 [2024-05-15 17:17:11.920246] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.333 [2024-05-15 17:17:11.929521] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.333 [2024-05-15 17:17:11.929985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-05-15 17:17:11.930192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-05-15 17:17:11.930203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.333 [2024-05-15 17:17:11.930209] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.333 [2024-05-15 17:17:11.930389] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.333 [2024-05-15 17:17:11.930568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.333 [2024-05-15 17:17:11.930576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.333 [2024-05-15 17:17:11.930582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.333 [2024-05-15 17:17:11.933440] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.333 [2024-05-15 17:17:11.942716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.333 [2024-05-15 17:17:11.943077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-05-15 17:17:11.943332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-05-15 17:17:11.943343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.333 [2024-05-15 17:17:11.943350] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.333 [2024-05-15 17:17:11.943529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.333 [2024-05-15 17:17:11.943709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.333 [2024-05-15 17:17:11.943718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.333 [2024-05-15 17:17:11.943724] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.333 [2024-05-15 17:17:11.946587] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.333 [2024-05-15 17:17:11.955865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.333 [2024-05-15 17:17:11.956338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-05-15 17:17:11.956527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-05-15 17:17:11.956537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.333 [2024-05-15 17:17:11.956544] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.333 [2024-05-15 17:17:11.956723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.333 [2024-05-15 17:17:11.956902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.333 [2024-05-15 17:17:11.956911] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.333 [2024-05-15 17:17:11.956918] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.333 [2024-05-15 17:17:11.959783] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.333 [2024-05-15 17:17:11.969068] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.333 [2024-05-15 17:17:11.969532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-05-15 17:17:11.969689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-05-15 17:17:11.969700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.333 [2024-05-15 17:17:11.969707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.333 [2024-05-15 17:17:11.969885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.333 [2024-05-15 17:17:11.970065] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.333 [2024-05-15 17:17:11.970073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.333 [2024-05-15 17:17:11.970079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.333 [2024-05-15 17:17:11.972943] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.333 [2024-05-15 17:17:11.982240] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.333 [2024-05-15 17:17:11.982692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-05-15 17:17:11.982851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.333 [2024-05-15 17:17:11.982862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.333 [2024-05-15 17:17:11.982869] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.333 [2024-05-15 17:17:11.983049] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.333 [2024-05-15 17:17:11.983234] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.333 [2024-05-15 17:17:11.983243] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.333 [2024-05-15 17:17:11.983249] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.333 [2024-05-15 17:17:11.986124] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.592 [2024-05-15 17:17:11.995500] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.592 [2024-05-15 17:17:11.995978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.592 [2024-05-15 17:17:11.996182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.592 [2024-05-15 17:17:11.996194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.592 [2024-05-15 17:17:11.996201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.592 [2024-05-15 17:17:11.996381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.592 [2024-05-15 17:17:11.996560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.592 [2024-05-15 17:17:11.996568] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.592 [2024-05-15 17:17:11.996574] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.592 [2024-05-15 17:17:11.999442] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.592 [2024-05-15 17:17:12.008735] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.592 [2024-05-15 17:17:12.009202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.592 [2024-05-15 17:17:12.009395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.592 [2024-05-15 17:17:12.009405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.592 [2024-05-15 17:17:12.009412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.592 [2024-05-15 17:17:12.009591] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.592 [2024-05-15 17:17:12.009770] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.592 [2024-05-15 17:17:12.009778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.592 [2024-05-15 17:17:12.009784] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.592 [2024-05-15 17:17:12.012642] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.592 [2024-05-15 17:17:12.021917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.592 [2024-05-15 17:17:12.022376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.592 [2024-05-15 17:17:12.022613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.592 [2024-05-15 17:17:12.022624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.592 [2024-05-15 17:17:12.022631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.593 [2024-05-15 17:17:12.022810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.593 [2024-05-15 17:17:12.022989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.593 [2024-05-15 17:17:12.022997] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.593 [2024-05-15 17:17:12.023003] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.593 [2024-05-15 17:17:12.025862] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:24.593 [2024-05-15 17:17:12.035147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.593 [2024-05-15 17:17:12.035614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.593 [2024-05-15 17:17:12.035866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.593 [2024-05-15 17:17:12.035876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.593 [2024-05-15 17:17:12.035883] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.593 [2024-05-15 17:17:12.036062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.593 [2024-05-15 17:17:12.036247] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.593 [2024-05-15 17:17:12.036255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.593 [2024-05-15 17:17:12.036261] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.593 [2024-05-15 17:17:12.039119] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.593 [2024-05-15 17:17:12.048239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.593 [2024-05-15 17:17:12.048523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.593 [2024-05-15 17:17:12.048633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.593 [2024-05-15 17:17:12.048644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.593 [2024-05-15 17:17:12.048650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.593 [2024-05-15 17:17:12.048829] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.593 [2024-05-15 17:17:12.049009] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.593 [2024-05-15 17:17:12.049017] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.593 [2024-05-15 17:17:12.049023] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.593 [2024-05-15 17:17:12.051891] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.593 [2024-05-15 17:17:12.061343] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.593 [2024-05-15 17:17:12.061646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.593 [2024-05-15 17:17:12.061877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.593 [2024-05-15 17:17:12.061888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.593 [2024-05-15 17:17:12.061895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.593 [2024-05-15 17:17:12.062074] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.593 [2024-05-15 17:17:12.062259] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.593 [2024-05-15 17:17:12.062268] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.593 [2024-05-15 17:17:12.062274] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:24.593 [2024-05-15 17:17:12.065133] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.593 [2024-05-15 17:17:12.070770] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.593 [2024-05-15 17:17:12.074450] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.593 [2024-05-15 17:17:12.074847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.593 [2024-05-15 17:17:12.075075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.593 [2024-05-15 17:17:12.075085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.593 [2024-05-15 17:17:12.075092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.593 [2024-05-15 17:17:12.075277] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.593 [2024-05-15 17:17:12.075457] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.593 [2024-05-15 17:17:12.075465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.593 [2024-05-15 17:17:12.075471] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
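The trap registered above ties test cleanup to shell exit: on SIGINT, SIGTERM, or normal EXIT the harness dumps shared-memory stats and tears the target down. A minimal sketch of that pattern follows, with process_shm and nvmftestfini shown only as placeholders for the harness helpers named in the log.

    # Cleanup runs on Ctrl-C, kill, or normal script exit.
    cleanup() {
        process_shm --id "$NVMF_APP_SHM_ID" || :   # best-effort stats dump
        nvmftestfini                               # stop nvmf_tgt, flush netns/interfaces
    }
    trap cleanup SIGINT SIGTERM EXIT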
00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:24.593 [2024-05-15 17:17:12.078341] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.593 [2024-05-15 17:17:12.087622] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.593 [2024-05-15 17:17:12.088082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.593 [2024-05-15 17:17:12.088264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.593 [2024-05-15 17:17:12.088274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.593 [2024-05-15 17:17:12.088281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.593 [2024-05-15 17:17:12.088461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.593 [2024-05-15 17:17:12.088640] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.593 [2024-05-15 17:17:12.088648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.593 [2024-05-15 17:17:12.088654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.593 [2024-05-15 17:17:12.091520] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.593 [2024-05-15 17:17:12.100850] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.593 [2024-05-15 17:17:12.101326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.593 [2024-05-15 17:17:12.101510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.593 [2024-05-15 17:17:12.101520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.593 [2024-05-15 17:17:12.101527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.593 [2024-05-15 17:17:12.101707] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.593 [2024-05-15 17:17:12.101890] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.593 [2024-05-15 17:17:12.101898] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.593 [2024-05-15 17:17:12.101904] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.593 [2024-05-15 17:17:12.104782] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
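The rpc_cmd calls interleaved with the reconnect errors are the target-side bring-up for this test: create the TCP transport, then a RAM-backed bdev to export. A hedged equivalent using SPDK's rpc.py client directly is sketched below; the script path is an assumption, and the transport options are forwarded exactly as they appear in the log.

    # Target bring-up, step 1: TCP transport plus a 64 MiB malloc bdev
    # (512-byte blocks) named Malloc0, matching the rpc_cmd calls above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0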
00:26:24.593 Malloc0 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:24.593 [2024-05-15 17:17:12.114073] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.593 [2024-05-15 17:17:12.114467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:24.593 [2024-05-15 17:17:12.114724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.593 [2024-05-15 17:17:12.114735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.593 [2024-05-15 17:17:12.114742] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.593 [2024-05-15 17:17:12.114922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.593 [2024-05-15 17:17:12.115102] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.593 [2024-05-15 17:17:12.115110] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.593 [2024-05-15 17:17:12.115116] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.593 [2024-05-15 17:17:12.117976] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:24.593 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.594 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:24.594 [2024-05-15 17:17:12.127257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.594 [2024-05-15 17:17:12.127621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.594 [2024-05-15 17:17:12.127873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.594 [2024-05-15 17:17:12.127883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b7840 with addr=10.0.0.2, port=4420 00:26:24.594 [2024-05-15 17:17:12.127890] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7840 is same with the state(5) to be set 00:26:24.594 [2024-05-15 17:17:12.128070] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b7840 (9): Bad file descriptor 00:26:24.594 [2024-05-15 17:17:12.128253] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.594 [2024-05-15 17:17:12.128262] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.594 [2024-05-15 17:17:12.128268] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.594 [2024-05-15 17:17:12.131132] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.594 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.594 17:17:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:24.594 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.594 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:24.594 [2024-05-15 17:17:12.136532] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:24.594 [2024-05-15 17:17:12.136758] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.594 [2024-05-15 17:17:12.140411] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.594 17:17:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.594 17:17:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3215923 00:26:24.594 [2024-05-15 17:17:12.170132] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
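The remaining bring-up steps complete the picture: a subsystem that allows any host (-a) with serial number SPDK00000000000001, the Malloc0 namespace, and finally the TCP listener on 10.0.0.2:4420. Only once the listener exists does the long run of failed resets end with "Resetting controller successful." The same sequence via rpc.py, again with the script path assumed:

    # Target bring-up, step 2: subsystem, namespace, listener.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420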
00:26:34.589 00:26:34.589 Latency(us) 00:26:34.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.589 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:34.589 Verification LBA range: start 0x0 length 0x4000 00:26:34.589 Nvme1n1 : 15.04 7971.54 31.14 12216.17 0.00 6304.13 633.99 41487.14 00:26:34.589 =================================================================================================================== 00:26:34.589 Total : 7971.54 31.14 12216.17 0.00 6304.13 633.99 41487.14 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:34.589 rmmod nvme_tcp 00:26:34.589 rmmod nvme_fabrics 00:26:34.589 rmmod nvme_keyring 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3216917 ']' 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3216917 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 3216917 ']' 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 3216917 00:26:34.589 17:17:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:26:34.590 17:17:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:34.590 17:17:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3216917 00:26:34.590 17:17:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:34.590 17:17:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:34.590 17:17:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3216917' 00:26:34.590 killing process with pid 3216917 00:26:34.590 17:17:20 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 3216917 00:26:34.590 [2024-05-15 17:17:20.923560] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:34.590 17:17:20 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@970 -- # wait 3216917 00:26:34.590 17:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:34.590 17:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:34.590 17:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:34.590 17:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:34.590 17:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:34.590 17:17:21 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.590 17:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.590 17:17:21 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.966 17:17:23 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:35.966 00:26:35.966 real 0m26.019s 00:26:35.966 user 1m2.904s 00:26:35.966 sys 0m6.062s 00:26:35.966 17:17:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:35.966 17:17:23 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:35.966 ************************************ 00:26:35.966 END TEST nvmf_bdevperf 00:26:35.966 ************************************ 00:26:35.966 17:17:23 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:35.966 17:17:23 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:35.966 17:17:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:35.966 17:17:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:35.966 ************************************ 00:26:35.966 START TEST nvmf_target_disconnect 00:26:35.966 ************************************ 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:35.966 * Looking for test storage... 
00:26:35.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:35.966 17:17:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:41.228 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:41.228 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.228 17:17:28 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:41.228 Found net devices under 0000:86:00.0: cvl_0_0 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:41.228 Found net devices under 0000:86:00.1: cvl_0_1 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:41.228 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:41.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:26:41.229 00:26:41.229 --- 10.0.0.2 ping statistics --- 00:26:41.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.229 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:41.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:26:41.229 00:26:41.229 --- 10.0.0.1 ping statistics --- 00:26:41.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.229 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:41.229 ************************************ 00:26:41.229 START TEST nvmf_target_disconnect_tc1 00:26:41.229 ************************************ 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:26:41.229 
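The nvmf_tcp_init trace above (nvmf/common.sh@229-268) wires the test topology: the first e810 port, cvl_0_0, is moved into the network namespace cvl_0_0_ns_spdk and becomes the target interface at 10.0.0.2, while its peer cvl_0_1 stays in the host as the initiator interface at 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are verified with ping. A minimal standalone sketch of the same wiring, using the interface names and addresses shown in the trace, would be:

    # Sketch of the namespace topology built by nvmf_tcp_init (names and addresses taken from the trace above).
    TARGET_IF=cvl_0_0        # moved into the namespace; owned by the NVMe-oF target
    INITIATOR_IF=cvl_0_1     # stays in the host; used by the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # host initiator -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1   # namespaced target -> host initiator

Because NVMF_TARGET_NS_CMD is set to "ip netns exec cvl_0_0_ns_spdk", every target-side command later in this log (including the nvmf_tgt start-up for tc2) runs inside that namespace.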
17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.229 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.229 [2024-05-15 17:17:28.712323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.229 [2024-05-15 17:17:28.712590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.229 [2024-05-15 17:17:28.712602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1499ae0 with addr=10.0.0.2, port=4420 00:26:41.229 [2024-05-15 17:17:28.712625] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:41.229 [2024-05-15 17:17:28.712638] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:41.229 [2024-05-15 17:17:28.712644] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:41.229 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:41.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:41.229 Initializing NVMe Controllers 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:41.229 00:26:41.229 real 0m0.097s 00:26:41.229 user 0m0.046s 00:26:41.229 sys 0m0.048s 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:41.229 ************************************ 00:26:41.229 END TEST nvmf_target_disconnect_tc1 00:26:41.229 ************************************ 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:41.229 ************************************ 00:26:41.229 START TEST nvmf_target_disconnect_tc2 00:26:41.229 ************************************ 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3221999 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3221999 00:26:41.229 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:41.230 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3221999 ']' 00:26:41.230 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.230 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:41.230 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
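nvmf_target_disconnect_tc1 above is a negative test: the reconnect example is launched against 10.0.0.2:4420 before any target is listening, connect() fails with errno 111 (connection refused), spdk_nvme_probe() therefore cannot create the admin qpair, and the NOT wrapper from autotest_common.sh turns the non-zero exit status into a pass (es=1). Stripped of the harness, the check amounts to the following sketch, with the SPDK path taken from the trace and the NOT helper reduced to a plain exit-status test:

    # tc1 in isolation: the reconnect example must fail while nothing listens on 10.0.0.2:4420.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    if "$SPDK/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo "unexpected success: no target should be listening yet" >&2
        exit 1
    else
        echo "reconnect exited non-zero as expected (ECONNREFUSED)"
    fi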
00:26:41.230 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:41.230 17:17:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:41.230 [2024-05-15 17:17:28.850329] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:26:41.230 [2024-05-15 17:17:28.850370] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.230 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.488 [2024-05-15 17:17:28.917553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:41.488 [2024-05-15 17:17:28.994303] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.488 [2024-05-15 17:17:28.994343] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.488 [2024-05-15 17:17:28.994350] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.488 [2024-05-15 17:17:28.994356] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.488 [2024-05-15 17:17:28.994361] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.488 [2024-05-15 17:17:28.994498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:41.488 [2024-05-15 17:17:28.994624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:41.488 [2024-05-15 17:17:28.995008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:41.488 [2024-05-15 17:17:28.995008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:42.054 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:42.054 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:26:42.054 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:42.054 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:42.054 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.054 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.054 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:42.054 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.054 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.054 Malloc0 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.312 [2024-05-15 17:17:29.720074] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.312 [2024-05-15 17:17:29.752115] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:42.312 [2024-05-15 17:17:29.752363] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3222132 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:42.312 17:17:29 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:42.312 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.210 17:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3221999 00:26:44.210 17:17:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed 
with error (sct=0, sc=8) 00:26:44.210 [2024-05-15 17:17:31.780057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 [2024-05-15 17:17:31.780262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 
00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Write completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 [2024-05-15 17:17:31.780454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.210 starting I/O failed 00:26:44.210 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Write completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting 
I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Write completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Write completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Write completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Write completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Write completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Write completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Write completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 Read completed with error (sct=0, sc=8) 00:26:44.211 starting I/O failed 00:26:44.211 [2024-05-15 17:17:31.780645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:44.211 [2024-05-15 17:17:31.780869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.781000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.781011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.781140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.781352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.781363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.781494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.781656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.781685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.781824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.781993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.782022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 
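For nvmf_target_disconnect_tc2, the trace at 17:17:29 starts nvmf_tgt inside the namespace with -m 0xF0 (its reactors land on cores 4-7, keeping clear of cores 0-3, which the reconnect example claims with -c 0xF) and then provisions it over RPC: a 64 MiB Malloc0 bdev with 512-byte blocks, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and data plus discovery listeners on 10.0.0.2:4420. The reconnect workload is started against that listener and, at 17:17:31, the script kills the target with kill -9; the aborted completions and failed reconnect attempts that fill the rest of this log are the intended result. Outside the harness, the same provisioning could be done with scripts/rpc.py (a sketch only; rpc_cmd in the log is a wrapper around the same RPCs on /var/tmp/spdk.sock, and $NVMF_TGT_PID below is a placeholder for the target's pid):

    # Hypothetical rpc.py equivalent of the rpc_cmd sequence traced above; run from the SPDK source tree.
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Start the initiator-side workload, then hard-kill the target to force the disconnect:
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    sleep 2
    kill -9 "$NVMF_TGT_PID"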
00:26:44.211 [2024-05-15 17:17:31.782130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.782292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.782301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.782375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.782557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.782566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.782705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.782832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.782842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.782947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.783144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.783154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.783346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.783477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.783488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.783608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.783846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.783875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.784028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.784235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.784266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 
00:26:44.211 [2024-05-15 17:17:31.784491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.784612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.784641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.784803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.785041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.785070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.785344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.785484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.785513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.785634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.785879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.785907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.786180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.786332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.786342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.786533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.786622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.786632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.786750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.786839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.786849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 
00:26:44.211 [2024-05-15 17:17:31.787011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.787170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.787180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.787252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.787460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.787489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.787700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.787838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.211 [2024-05-15 17:17:31.787867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.211 qpair failed and we were unable to recover it. 00:26:44.211 [2024-05-15 17:17:31.788019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.788151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.788193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.788303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.788540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.788550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.788730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.788822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.788832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.788964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.789203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.789217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 
00:26:44.212 [2024-05-15 17:17:31.789463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.789662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.789691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.789919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.790054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.790083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.790254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.790542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.790570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.790779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.791063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.791091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.791233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.791372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.791400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.791613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.791820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.791848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.792063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.792333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.792363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 
00:26:44.212 [2024-05-15 17:17:31.792615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.792772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.792785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.793004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.793217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.793247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.793377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.793483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.793511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.793723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.793937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.793965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.794185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.794367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.794395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.794543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.794764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.794792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.795062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.795268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.795298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 
00:26:44.212 [2024-05-15 17:17:31.795509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.795691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.795704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.795958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.796045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.796058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.796297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.796442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.796455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.796635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.796815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.796829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.797063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.797264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.797278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.797462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.797574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.797587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.797824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.797920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.797933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 
00:26:44.212 [2024-05-15 17:17:31.798047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.798148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.798162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.798469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.798650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.798664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.798855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.799076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.799090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.212 qpair failed and we were unable to recover it. 00:26:44.212 [2024-05-15 17:17:31.799261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.212 [2024-05-15 17:17:31.799372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.799385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.799557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.799774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.799803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.799961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.800152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.800189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.800352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.800636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.800665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 
00:26:44.213 [2024-05-15 17:17:31.800957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.801163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.801182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.801303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.801416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.801430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.801630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.801906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.801935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.802207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.802404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.802417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.802538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.802672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.802684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.802806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.803013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.803042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.803252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.803393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.803407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 
00:26:44.213 [2024-05-15 17:17:31.803602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.803759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.803788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.804057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.804196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.804210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.804411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.804603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.804631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.804854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.805000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.805013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.805124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.805288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.805302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.805481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.805645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.805658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.805836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.806045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.806058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 
00:26:44.213 [2024-05-15 17:17:31.806176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.806363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.806376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.806642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.806875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.806903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.807059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.807260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.807274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.807383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.807499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.807512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.807625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.807789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.807802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.807976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.808205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.808235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.808391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.808584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.808612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 
00:26:44.213 [2024-05-15 17:17:31.808922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.809122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.809136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.809308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.809541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.809554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.809786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.809990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.810003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.213 qpair failed and we were unable to recover it. 00:26:44.213 [2024-05-15 17:17:31.810113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.810227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.213 [2024-05-15 17:17:31.810241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.810363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.810526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.810539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.810721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.810986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.811015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.811305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.811442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.811455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 
00:26:44.214 [2024-05-15 17:17:31.811656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.811769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.811782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.811967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.812089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.812102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.812280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.812455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.812468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.812635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.812804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.812817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.812983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.813100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.813113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.813224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.813465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.813478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.813728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.813856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.813869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 
00:26:44.214 [2024-05-15 17:17:31.813974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.814205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.814218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.814384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.814574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.814602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.814761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.814968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.814996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.815142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.815390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.815403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.815538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.815696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.815709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.815941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.816181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.816194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.816442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.816547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.816560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 
00:26:44.214 [2024-05-15 17:17:31.816741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.817040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.817069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.817257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.817399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.817412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.817554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.817717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.817730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.817842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.817972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.817985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.818214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.818338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.818367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.214 qpair failed and we were unable to recover it. 00:26:44.214 [2024-05-15 17:17:31.818633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.214 [2024-05-15 17:17:31.818770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.818798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.819046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.819189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.819219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 
00:26:44.215 [2024-05-15 17:17:31.819423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.819629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.819658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.819878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.820113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.820142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.820363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.820476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.820490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.820723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.820897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.820910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.821045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.821225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.821239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.821327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.821498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.821514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.821693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.821948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.821976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 
00:26:44.215 [2024-05-15 17:17:31.822209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.822417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.822446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.822597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.822856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.822884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.823100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.823363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.823392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.823608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.823819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.823847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.824072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.824312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.824341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.824563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.824715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.824743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.825028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.825264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.825294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 
00:26:44.215 [2024-05-15 17:17:31.825581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.825821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.825850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.826051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.826231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.826266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.826551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.826698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.826727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.826936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.827075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.827103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.827247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.827513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.827526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.827644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.827821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.827835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.828090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.828256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.828269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 
00:26:44.215 [2024-05-15 17:17:31.828397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.828631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.828644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.828769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.828950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.828963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.829036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.829141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.829154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.829276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.829397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.829411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.829530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.829651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.829667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.215 qpair failed and we were unable to recover it. 00:26:44.215 [2024-05-15 17:17:31.829770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.829945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.215 [2024-05-15 17:17:31.829958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.830135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.830415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.830445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 
00:26:44.216 [2024-05-15 17:17:31.830589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.830860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.830888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.831143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.831375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.831389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.831576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.831735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.831749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.831844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.831949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.831962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.832140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.832345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.832359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.832531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.832724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.832753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.832905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.833135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.833163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 
00:26:44.216 [2024-05-15 17:17:31.833499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.833713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.833741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.833956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.834097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.834125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.834428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.834621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.834635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.834747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.834862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.834875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.835090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.835369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.835399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.835604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.835807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.835835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.836054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.836189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.836218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 
00:26:44.216 [2024-05-15 17:17:31.836430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.836709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.836738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.836953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.837109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.837137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.837347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.837540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.837570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.837728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.837936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.837964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.838207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.838516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.838545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.838746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.838953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.838981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.839190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.839334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.839363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 
00:26:44.216 [2024-05-15 17:17:31.839521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.839782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.839810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.839958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.840097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.840126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.840399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.840550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.840579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.840797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.840934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.840962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.841109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.841369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.841399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.841688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.841892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.216 [2024-05-15 17:17:31.841922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.216 qpair failed and we were unable to recover it. 00:26:44.216 [2024-05-15 17:17:31.842138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.842314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.842343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 
00:26:44.217 [2024-05-15 17:17:31.842497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.842779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.842808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.842956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.843216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.843245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.843520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.843644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.843657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.843847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.844053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.844082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.844316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.844465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.844494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.844641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.844924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.844953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.845174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.845360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.845388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 
00:26:44.217 [2024-05-15 17:17:31.845685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.845819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.845847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.846144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.846367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.846396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.846664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.846789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.846818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.847039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.847185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.847215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.847483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.847676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.847704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.847968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.848159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.848176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.848407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.848604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.848632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 
00:26:44.217 [2024-05-15 17:17:31.848776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.848883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.848911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.849145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.849252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.849282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.849505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.849730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.849758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.849879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.850090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.850119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.850367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.850546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.850574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.850749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.850902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.850930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.851072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.851291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.851305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 
00:26:44.217 [2024-05-15 17:17:31.851435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.851531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.851544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.851725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.851981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.851994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.852174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.852350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.852363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.852545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.852668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.852681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.852788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.852955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.852968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.853085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.853266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.853279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.217 qpair failed and we were unable to recover it. 00:26:44.217 [2024-05-15 17:17:31.853531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.217 [2024-05-15 17:17:31.853744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.853773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 
00:26:44.218 [2024-05-15 17:17:31.853922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.854130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.854158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.854313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.854454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.854482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.854758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.854863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.854892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.855094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.855308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.855337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.855468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.855659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.855687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.855979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.856135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.856148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.856267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.856499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.856513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 
00:26:44.218 [2024-05-15 17:17:31.856681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.856797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.856811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.856886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.857059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.857072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.857304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.857468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.857496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.857789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.858006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.858044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.858171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.858336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.858350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.858465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.858571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.858585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.858751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.859010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.859038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 
00:26:44.218 [2024-05-15 17:17:31.859249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.859429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.859457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.859669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.859889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.859917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.860134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.860290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.860328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.860435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.860556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.860569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.860827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.861077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.861091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.861209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.861391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.861404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.861533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.861625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.861638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 
00:26:44.218 [2024-05-15 17:17:31.861752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.861987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.862009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.862223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.862429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.862454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.862562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.862733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.862747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.862852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.863042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.863068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.863198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.863300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.863313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.863396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.863637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.863650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 00:26:44.218 [2024-05-15 17:17:31.863768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.864017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.218 [2024-05-15 17:17:31.864037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.218 qpair failed and we were unable to recover it. 
00:26:44.219 [2024-05-15 17:17:31.864264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.219 [2024-05-15 17:17:31.864457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.219 [2024-05-15 17:17:31.864470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.219 qpair failed and we were unable to recover it. 00:26:44.219 [2024-05-15 17:17:31.864585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.219 [2024-05-15 17:17:31.864768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.219 [2024-05-15 17:17:31.864783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.219 qpair failed and we were unable to recover it. 00:26:44.219 [2024-05-15 17:17:31.864864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.219 [2024-05-15 17:17:31.865118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.219 [2024-05-15 17:17:31.865132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.219 qpair failed and we were unable to recover it. 00:26:44.219 [2024-05-15 17:17:31.865313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.219 [2024-05-15 17:17:31.865488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.219 [2024-05-15 17:17:31.865501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.219 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.865696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.865916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.865936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.866130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.866253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.866268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.866366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.866568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.866582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 
00:26:44.490 [2024-05-15 17:17:31.866767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.866958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.866971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.867085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.867200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.867214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.867338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.867451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.867465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.867703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.867862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.867875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.867986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.868170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.868185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.868284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.868537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.868551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.868857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.868997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.869026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 
00:26:44.490 [2024-05-15 17:17:31.869295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.869511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.869541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.869837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.870046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.870075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.870336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.870565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.870578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.870779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.870857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.870870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.870978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.871156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.871207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.871374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.871574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.871603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.871843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.872058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.872087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 
00:26:44.490 [2024-05-15 17:17:31.872300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.872409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.872422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.872585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.872760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.872773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.872937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.873120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.873133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.873323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.873470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.873500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.490 qpair failed and we were unable to recover it. 00:26:44.490 [2024-05-15 17:17:31.873712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.490 [2024-05-15 17:17:31.873954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.873983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.874184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.874315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.874328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.874450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.874648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.874661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 
00:26:44.491 [2024-05-15 17:17:31.874837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.875029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.875058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.875257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.875466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.875496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.875649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.875786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.875815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.875968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.876115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.876144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.876288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.876398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.876411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.876592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.876779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.876808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.876975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.877137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.877176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 
00:26:44.491 [2024-05-15 17:17:31.877414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.877554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.877567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.877811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.877945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.877974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.878117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.878396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.878427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.878626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.878907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.878935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.879079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.879277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.879307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.879517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.879701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.879715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.879984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.880191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.880205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 
00:26:44.491 [2024-05-15 17:17:31.880487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.880753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.880768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.880956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.881132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.881163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.881464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.881755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.881798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.882008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.882219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.882249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.882438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.882685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.882714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.882924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.883157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.883199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.883409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.883598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.883628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 
00:26:44.491 [2024-05-15 17:17:31.883847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.884135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.884172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.884463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.884685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.884714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.884955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.885214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.885228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.885435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.885594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.491 [2024-05-15 17:17:31.885607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.491 qpair failed and we were unable to recover it. 00:26:44.491 [2024-05-15 17:17:31.885790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.885999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.886012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.886206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.886325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.886342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.886440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.886602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.886616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 
00:26:44.492 [2024-05-15 17:17:31.886803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.886976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.886989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.887112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.887206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.887220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.887404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.887586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.887615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.887817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.888018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.888047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.888268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.888497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.888511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.888605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.888778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.888791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.888973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.889212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.889242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 
00:26:44.492 [2024-05-15 17:17:31.889392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.889534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.889564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.889766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.889980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.890015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.890141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.890298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.890328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.890596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.890786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.890816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.891037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.891188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.891217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.891408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.891558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.891587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.891867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.892088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.892126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 
00:26:44.492 [2024-05-15 17:17:31.892384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.892636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.892649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.892882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.892993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.893024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.893246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.893450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.893479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.893768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.893933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.893947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.894133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.894339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.894375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.894596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.894799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.894828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.895070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.895277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.895290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 
00:26:44.492 [2024-05-15 17:17:31.895478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.895650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.895663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.895866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.896040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.896054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.896247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.896453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.896491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.896707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.896925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.492 [2024-05-15 17:17:31.896955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.492 qpair failed and we were unable to recover it. 00:26:44.492 [2024-05-15 17:17:31.897092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.897329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.897359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.897624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.897818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.897847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.898056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.898286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.898299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 
00:26:44.493 [2024-05-15 17:17:31.898490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.898697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.898726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.898998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.899206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.899219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.899471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.899635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.899648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.899860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.900003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.900031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.900251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.900447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.900476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.900677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.900818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.900847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.901049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.901255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.901269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 
00:26:44.493 [2024-05-15 17:17:31.901508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.901769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.901798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.901939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.902201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.902232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.902388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.902513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.902553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.902719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.902847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.902860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.902996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.903157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.903175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.903289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.903459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.903488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.903700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.904017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.904046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 
00:26:44.493 [2024-05-15 17:17:31.904267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.904472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.904500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.904734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.904906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.904919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.905128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.905281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.905309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.905505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.905729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.905757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.905995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.906124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.906153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.906337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.906525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.906554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.906846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.906986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.907015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 
00:26:44.493 [2024-05-15 17:17:31.907313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.907517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.907531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.907708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.907873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.907904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.908119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.908436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.908466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.908601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.908753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.908783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.493 qpair failed and we were unable to recover it. 00:26:44.493 [2024-05-15 17:17:31.909014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.493 [2024-05-15 17:17:31.909159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.909196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.909459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.909625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.909638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.909764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.909957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.909987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 
00:26:44.494 [2024-05-15 17:17:31.910221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.910534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.910562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.910834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.911125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.911154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.911316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.911461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.911489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.911760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.912043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.912071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.912281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.912489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.912517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.912729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.912930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.912958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.913234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.913427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.913440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 
00:26:44.494 [2024-05-15 17:17:31.913671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.913779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.913809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.914109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.914318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.914354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.914611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.914793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.914806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.914914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.915028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.915041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.915211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.915391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.915405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.915595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.915779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.915808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.916078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.916288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.916319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 
00:26:44.494 [2024-05-15 17:17:31.916574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.916844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.916857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.917036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.917201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.917215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.917393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.917620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.917633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.917814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.917915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.917929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.918030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.918195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.918209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.494 qpair failed and we were unable to recover it. 00:26:44.494 [2024-05-15 17:17:31.918390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.918629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.494 [2024-05-15 17:17:31.918642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.918817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.919017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.919046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 
00:26:44.495 [2024-05-15 17:17:31.919246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.919464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.919494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.919650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.919913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.919942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.920085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.920350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.920380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.920531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.920692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.920705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.920875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.921122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.921151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.921474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.921625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.921654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.921801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.922087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.922115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 
00:26:44.495 [2024-05-15 17:17:31.922347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.922550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.922579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.922818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.923081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.923109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.923379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.923649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.923678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.923892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.924042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.924070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.924353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.924557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.924586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.924801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.925060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.925089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.925382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.925584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.925597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 
00:26:44.495 [2024-05-15 17:17:31.925795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.925919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.925932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.926188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.926351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.926365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.926577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.926809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.926822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.927017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.927222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.927254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.927495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.927755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.927768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.928024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.928253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.928267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.928452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.928635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.928664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 
00:26:44.495 [2024-05-15 17:17:31.928963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.929182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.929212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.929444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.929668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.929697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.929896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.930181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.930210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.930419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.930576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.930604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.930794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.931048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.931076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.495 [2024-05-15 17:17:31.931218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.931447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.495 [2024-05-15 17:17:31.931476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.495 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.931621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.931784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.931797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 
00:26:44.496 [2024-05-15 17:17:31.932057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.932175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.932190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.932421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.932582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.932610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.932744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.932975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.933003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.933216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.933444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.933472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.933690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.933923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.933952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.934198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.934486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.934515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.934748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.934853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.934866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 
00:26:44.496 [2024-05-15 17:17:31.934981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.935180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.935210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.935376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.935501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.935529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.935746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.935898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.935927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.936230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.936420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.936449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.936593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.936700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.936713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.936898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.937134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.937162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.937329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.937541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.937570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 
00:26:44.496 [2024-05-15 17:17:31.937781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.937961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.937990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.938191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.938406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.938435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.938726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.938840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.938853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.939028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.939339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.939368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.939535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.939672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.939685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.939802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.940047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.940060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.940240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.940495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.940524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 
00:26:44.496 [2024-05-15 17:17:31.940805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.941064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.941093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.941303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.941567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.941596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.941815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.942080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.942108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.942319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.942604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.942633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.942795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.942955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.942984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.496 [2024-05-15 17:17:31.943129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.943285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.496 [2024-05-15 17:17:31.943315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.496 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.943581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.943845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.943858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 
00:26:44.497 [2024-05-15 17:17:31.943973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.944114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.944127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.944247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.944446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.944459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.944549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.944736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.944750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.944847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.945026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.945039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.945276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.945405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.945419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.945533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.945708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.945736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.946008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.946203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.946234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 
00:26:44.497 [2024-05-15 17:17:31.946468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.946672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.946685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.946795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.946964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.946978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.947153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.947327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.947367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.947509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.947706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.947734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.947889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.948157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.948195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.948360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.948593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.948621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.948770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.948986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.949015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 
00:26:44.497 [2024-05-15 17:17:31.949208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.949350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.949379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.949652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.949857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.949886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.950087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.950228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.950262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.950432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.950641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.950654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.950845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.951021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.951051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.951206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.951363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.951392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.951615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.951770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.951783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 
00:26:44.497 [2024-05-15 17:17:31.951894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.952009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.952022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.952203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.952364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.952377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.952545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.952734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.952747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.952981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.953179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.953193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.953320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.953488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.953501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.953584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.953811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.497 [2024-05-15 17:17:31.953826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.497 qpair failed and we were unable to recover it. 00:26:44.497 [2024-05-15 17:17:31.953933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.954086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.954099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 
00:26:44.498 [2024-05-15 17:17:31.954263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.954501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.954530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.954774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.954975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.955004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.955202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.955389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.955418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.955656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.955850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.955878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.956172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.956337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.956366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.956571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.956783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.956812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.957030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.957268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.957298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 
00:26:44.498 [2024-05-15 17:17:31.957565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.957737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.957765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.958041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.958203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.958238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.958513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.958787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.958816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.958978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.959262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.959292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.959560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.959728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.959757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.959958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.960153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.960191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.960427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.960566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.960594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 
00:26:44.498 [2024-05-15 17:17:31.960797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.960995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.961023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.961192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.961403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.961432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.961583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.961698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.961712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.961940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.962071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.962113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.962412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.962641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.962676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.962887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.963060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.963089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.963311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.963541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.963570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 
00:26:44.498 [2024-05-15 17:17:31.963843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.963970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.963991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.964190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.964361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.964376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.964623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.964720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.964734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.964914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.965112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.965126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.965367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.965489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.965507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.965657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.965849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.965867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.498 qpair failed and we were unable to recover it. 00:26:44.498 [2024-05-15 17:17:31.966058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.498 [2024-05-15 17:17:31.966200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.966218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 
00:26:44.499 [2024-05-15 17:17:31.966451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.966569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.966585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.966736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.966864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.966885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.966994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.967136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.967155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.967374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.967513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.967538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.967721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.967876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.967905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.968114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.968269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.968287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.968448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.968676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.968706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 
00:26:44.499 [2024-05-15 17:17:31.968904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.969118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.969154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.969319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.969491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.969518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.969692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.969821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.969837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.969963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.970111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.970124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.970238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.970351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.970365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.970482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.970667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.970680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.970793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.970919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.970932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 
00:26:44.499 [2024-05-15 17:17:31.971042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.971148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.971161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.971364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.971582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.971611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.971831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.972037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.972066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.972233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.972451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.972480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.972693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.972790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.972803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.972921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.973099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.973112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.973288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.973401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.973414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 
00:26:44.499 [2024-05-15 17:17:31.973534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.973659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.973673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.499 [2024-05-15 17:17:31.973863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.974154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.499 [2024-05-15 17:17:31.974200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.499 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.974500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.974628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.974657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.974875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.975066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.975095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.975317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.975472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.975501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.975663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.975925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.975954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.976179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.976409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.976438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-05-15 17:17:31.976704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.976921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.976934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.977191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.977420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.977433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.977640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.977817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.977830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.977940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.978211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.978243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.978400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.978595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.978636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.978884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.979049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.979062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.979239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.979359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.979372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-05-15 17:17:31.979558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.979687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.979700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.979832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.979941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.979954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.980083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.980260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.980275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.980456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.980634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.980647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.980819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.981044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.981081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.981290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.981441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.981470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.981616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.981794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.981807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-05-15 17:17:31.981985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.982228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.982260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.982463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.982690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.982719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.982930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.983138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.983182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.983423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.983704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.983734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.983906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.984136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.984177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.984347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.984492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.984522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.984721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.984877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.984905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 
00:26:44.500 [2024-05-15 17:17:31.985048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.985193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.985224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.985495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.985637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.500 [2024-05-15 17:17:31.985666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.500 qpair failed and we were unable to recover it. 00:26:44.500 [2024-05-15 17:17:31.985915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.986212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.986243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.986411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.986532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.986561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.986796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.986918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.986931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.987097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.987247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.987277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.987415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.987620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.987634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-05-15 17:17:31.987814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.987929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.987942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.988141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.988311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.988324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.988431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.988608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.988622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.988726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.988834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.988847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.988979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.989094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.989108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.989220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x245d770 is same with the state(5) to be set 00:26:44.501 [2024-05-15 17:17:31.989455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.989570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.989584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.989694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.989872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.989902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-05-15 17:17:31.990183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.990403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.990434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.990701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.990899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.990928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.991085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.991369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.991399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.991580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.991839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.991868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.992030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.992239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.992270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.992405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.992582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.992595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.992698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.992830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.992845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-05-15 17:17:31.993107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.993384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.993397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.993582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.993689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.993702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.993936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.994042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.994055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.994215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.994342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.994356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.994591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.994694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.994708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.994957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.995149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.995162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.995295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.995394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.995408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 
00:26:44.501 [2024-05-15 17:17:31.995568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.995673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.995686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.501 qpair failed and we were unable to recover it. 00:26:44.501 [2024-05-15 17:17:31.995867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.501 [2024-05-15 17:17:31.996085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.996098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:31.996232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.996361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.996374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:31.996614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.996817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.996846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:31.996987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.997195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.997225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:31.997527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.997669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.997699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:31.997846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.998025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.998039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 
00:26:44.502 [2024-05-15 17:17:31.998284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.998467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.998481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:31.998589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.998769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.998782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:31.998908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.999107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.999120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:31.999289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.999403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.999416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:31.999598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.999780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:31.999808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:31.999957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.000101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.000130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:32.000372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.000479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.000492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 
00:26:44.502 [2024-05-15 17:17:32.000758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.000990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.001019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:32.001237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.001380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.001409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:32.001604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.001790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.001818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:32.001969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.002071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.002100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:32.002312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.002576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.002604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:32.002872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.002999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.003028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:32.003251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.003456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.003485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 
00:26:44.502 [2024-05-15 17:17:32.003728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.003923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.003952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:32.004234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.004393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.004422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:32.004569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.004711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.004741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.502 qpair failed and we were unable to recover it. 00:26:44.502 [2024-05-15 17:17:32.004884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.005024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.502 [2024-05-15 17:17:32.005059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.005328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.005543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.005573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.005834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.006092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.006106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.006289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.006521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.006534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 
00:26:44.503 [2024-05-15 17:17:32.006714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.006891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.006904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.007149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.007354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.007384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.007596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.007815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.007844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.008109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.008245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.008278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.008490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.008633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.008661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.008793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.008923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.008936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.009017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.009139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.009185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 
00:26:44.503 [2024-05-15 17:17:32.009400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.009618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.009647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.009769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.009888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.009901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.009999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.010104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.010118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.010284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.010452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.010465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.010650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.010753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.010767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.010954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.011131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.011160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.011376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.011609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.011638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 
00:26:44.503 [2024-05-15 17:17:32.011841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.012026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.012055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.012223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.012491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.012529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.012699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.012824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.012838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.013074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.013196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.013210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.013485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.013606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.013634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.013872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.014070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.014099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.503 [2024-05-15 17:17:32.014327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.014469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.014497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 
00:26:44.503 [2024-05-15 17:17:32.014717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.014916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.503 [2024-05-15 17:17:32.014929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.503 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.015185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.015313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.015326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.015427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.015528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.015542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.015694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.015809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.015822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.015932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.016175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.016193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.016313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.016510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.016523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.016655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.016836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.016849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 
00:26:44.504 [2024-05-15 17:17:32.017033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.017136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.017149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.017273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.017377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.017390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.017497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.017617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.017630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.017827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.018037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.018065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.018224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.018359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.018371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.018455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.018647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.018660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.018843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.018960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.018973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 
00:26:44.504 [2024-05-15 17:17:32.019227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.019432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.019445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.019562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.019672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.019685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.019884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.020014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.020027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.020194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.020289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.020302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.020483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.020701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.020730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.504 qpair failed and we were unable to recover it. 00:26:44.504 [2024-05-15 17:17:32.020880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.504 [2024-05-15 17:17:32.021021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.021051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.021189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.021472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.021501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 
00:26:44.505 [2024-05-15 17:17:32.021723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.021932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.021961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.022241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.022448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.022477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.022690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.022884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.022913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.023125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.023283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.023314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.023537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.023817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.023846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.023992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.024211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.024244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.024455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.024689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.024718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 
00:26:44.505 [2024-05-15 17:17:32.024933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.025155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.025215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.025510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.025724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.025753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.025973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.026162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.026204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.026453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.026722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.026750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.026961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.027189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.027219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.027437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.027630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.027659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.027868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.028118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.028147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 
00:26:44.505 [2024-05-15 17:17:32.028327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.028530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.028559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.028774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.028970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.029005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.029221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.029416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.029444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.029604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.029718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.029731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.029928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.030093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.030106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.030277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.030402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.030416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.030542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.030655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.030668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 
00:26:44.505 [2024-05-15 17:17:32.030787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.030971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.030983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.031184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.031377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.031391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.505 [2024-05-15 17:17:32.031488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.031618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.505 [2024-05-15 17:17:32.031632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.505 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.031927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.032098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.032112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.032386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.032567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.032581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.032830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.032982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.032996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.033288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.033478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.033492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 
00:26:44.506 [2024-05-15 17:17:32.033686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.033957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.033972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.034178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.034444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.034460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.034725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.034955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.034969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.035136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.035345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.035361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.035479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.035621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.035634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.035851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.035973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.035988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.036125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.036355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.036370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 
00:26:44.506 [2024-05-15 17:17:32.036560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.036724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.036737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.036918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.037115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.037128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.037293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.037514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.037527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.037769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.038069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.038082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.038266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.038464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.038478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.038726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.038976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.038989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.039249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.039420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.039433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 
00:26:44.506 [2024-05-15 17:17:32.039689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.039904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.039917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.040158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.040348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.040362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.040495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.040608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.040621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.040861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.041136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.041150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.041340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.041462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.041475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.041591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.041764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.506 [2024-05-15 17:17:32.041778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.506 qpair failed and we were unable to recover it. 00:26:44.506 [2024-05-15 17:17:32.041959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.042225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.042241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 
00:26:44.507 [2024-05-15 17:17:32.042409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.042664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.042677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.042909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.043189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.043202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.043334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.043505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.043518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.043748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.043945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.043974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.044231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.044412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.044425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.044754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.044918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.044931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.045135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.045385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.045399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 
00:26:44.507 [2024-05-15 17:17:32.045580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.045707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.045720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.045854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.046109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.046122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.046303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.046482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.046495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.046679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.046852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.046865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.047131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.047361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.047375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.047557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.047720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.047734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.047900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.048081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.048094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 
00:26:44.507 [2024-05-15 17:17:32.048267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.048515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.048528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.048782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.048956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.048969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.049245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.049496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.049509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.049694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.049867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.049883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.050062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.050337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.050352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.050544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.050736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.050748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.050920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.051184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.051199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 
00:26:44.507 [2024-05-15 17:17:32.051429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.051641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.051654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.051960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.052210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.052224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.507 qpair failed and we were unable to recover it. 00:26:44.507 [2024-05-15 17:17:32.052456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.507 [2024-05-15 17:17:32.052583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.052596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.052834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.053104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.053118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.053391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.053623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.053636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.053814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.054003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.054016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.054281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.054416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.054429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-05-15 17:17:32.054689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.054941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.054954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.055204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.055461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.055475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.055733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.055915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.055928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.056109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.056277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.056291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.056472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.056700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.056713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.057014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.057274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.057288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.057470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.057720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.057733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-05-15 17:17:32.057948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.058201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.058217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.058484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.058667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.058680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.058865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.058996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.059009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.059254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.059437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.059450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.059627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.059907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.059920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.060173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.060417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.060430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.060614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.060878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.060892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 
00:26:44.508 [2024-05-15 17:17:32.061099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.061297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.061311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.061477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.061730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.061743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.062016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.062248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.062264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.508 qpair failed and we were unable to recover it. 00:26:44.508 [2024-05-15 17:17:32.062495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.508 [2024-05-15 17:17:32.062722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.062735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.063035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.063288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.063302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.063567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.063804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.063817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.064053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.064305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.064319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 
00:26:44.509 [2024-05-15 17:17:32.064576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.064780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.064794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.064980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.065180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.065194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.065377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.065552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.065565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.065819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.065950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.065964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.066146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.066383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.066398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.066523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.066796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.066810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.067017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.067286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.067300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 
00:26:44.509 [2024-05-15 17:17:32.067535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.067664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.067677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.067878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.068134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.068147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.068398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.068574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.068587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.068829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.068991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.069004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.069230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.069465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.069478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.069605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.069779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.069792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.069987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.070171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.070189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 
00:26:44.509 [2024-05-15 17:17:32.070322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.070578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.070591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.070845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.071096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.071109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.071363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.071562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.071575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.071739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.071973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.071986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.072100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.072390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.072404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.072530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.072808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.072824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.509 qpair failed and we were unable to recover it. 00:26:44.509 [2024-05-15 17:17:32.073078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.073323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.509 [2024-05-15 17:17:32.073336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 
00:26:44.510 [2024-05-15 17:17:32.073573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.073774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.073788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.074032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.074287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.074302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.074483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.074665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.074678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.074872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.075040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.075053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.075306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.075424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.075437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.075623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.075854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.075867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.076102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.076280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.076294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 
00:26:44.510 [2024-05-15 17:17:32.076467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.076631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.076644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.076936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.077169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.077183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.077368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.077543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.077556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.077845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.078130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.078142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.078315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.078498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.078511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.078742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.079022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.079035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.079279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.079532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.079545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 
00:26:44.510 [2024-05-15 17:17:32.079674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.079931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.079944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.080136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.080330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.080344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.080587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.080841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.080854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.510 qpair failed and we were unable to recover it. 00:26:44.510 [2024-05-15 17:17:32.081115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.081356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.510 [2024-05-15 17:17:32.081369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.081546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.081640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.081652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.081866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.082093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.082107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.082342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.082521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.082535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 
00:26:44.511 [2024-05-15 17:17:32.082736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.082910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.082923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.083180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.083435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.083448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.083702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.083950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.083963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.084139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.084404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.084417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.084605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.084776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.084789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.085084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.085288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.085302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.085560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.085826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.085839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 
00:26:44.511 [2024-05-15 17:17:32.086133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.086365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.086380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.086619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.086800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.086814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.086979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.087174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.087188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.087420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.087585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.087598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.087803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.088077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.088091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.088343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.088507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.088520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.088733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.088921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.088935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 
00:26:44.511 [2024-05-15 17:17:32.089045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.089299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.089312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.089490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.089686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.089699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.089952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.090208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.090223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.090431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.090713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.090726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.090985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.091100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.091114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.091364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.091545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.091559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.511 [2024-05-15 17:17:32.091813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.092064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.092077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 
00:26:44.511 [2024-05-15 17:17:32.092244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.092442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.511 [2024-05-15 17:17:32.092456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.511 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.092688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.092874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.092887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.093068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.093173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.093187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.093476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.093654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.093667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.093916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.094173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.094190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.094446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.094682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.094695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.094928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.095183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.095197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-05-15 17:17:32.095375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.095557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.095574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.095745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.095908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.095921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.096149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.096278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.096292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.096500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.096701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.096714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.096971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.097176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.097190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.097371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.097622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.097635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.097752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.098009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.098022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-05-15 17:17:32.098196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.098426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.098440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.098626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.098728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.098742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.098928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.099208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.099222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.099348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.099462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.099477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.099613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.099795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.099808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.099973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.100257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.100271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.100525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.100801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.100814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 
00:26:44.512 [2024-05-15 17:17:32.101070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.101253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.101268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.101525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.101631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.101644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.101844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.102108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.102121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.102401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.102604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.102617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.512 qpair failed and we were unable to recover it. 00:26:44.512 [2024-05-15 17:17:32.102801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.512 [2024-05-15 17:17:32.102966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.102979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.103234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.103406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.103420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.103659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.103838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.103851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-05-15 17:17:32.104041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.104319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.104332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.104535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.104792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.104805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.104990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.105095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.105108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.105363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.105605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.105618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.105882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.106135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.106149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.106334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.106588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.106601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.106782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.107032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.107046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-05-15 17:17:32.107299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.107549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.107562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.107749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.107949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.107962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.108154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.108417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.108431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.108737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.108910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.108923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.109157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.109384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.109397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.109700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.109887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.109900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.110132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.110383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.110398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-05-15 17:17:32.110634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.110892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.110905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.111135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.111302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.111315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.111569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.111816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.111829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.112086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.112290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.112304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.112586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.112796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.112809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.113089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.113351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.113365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.113624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.113869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.113882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 
00:26:44.513 [2024-05-15 17:17:32.114061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.114186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.114201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.513 [2024-05-15 17:17:32.114432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.114606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.513 [2024-05-15 17:17:32.114619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.513 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.114820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.114999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.115012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.115265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.115393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.115407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.115658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.115837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.115849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.116094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.116271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.116284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.116394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.116573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.116586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-05-15 17:17:32.116797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.116961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.116974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.117221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.117397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.117411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.117590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.117874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.117888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.118104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.118334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.118350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.118524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.118810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.118823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.119025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.119198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.119212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.119403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.119662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.119676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-05-15 17:17:32.119927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.120185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.120199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.120443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.120673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.120686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.120850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.121097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.121111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.121282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.121530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.121543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.121789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.121971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.121984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.122104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.122334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.122352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.122485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.122647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.122660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-05-15 17:17:32.122841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.123063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.123076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.123212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.123461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.123475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.123657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.123819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.123831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.124013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.124264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.124277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.124478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.124652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.124665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.124844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.125061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.125074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 00:26:44.514 [2024-05-15 17:17:32.125346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.125601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.514 [2024-05-15 17:17:32.125614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.514 qpair failed and we were unable to recover it. 
00:26:44.514 [2024-05-15 17:17:32.125777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.126043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.126056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.126311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.126443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.126457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.126691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.126954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.126967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.127199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.127430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.127443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.127640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.127893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.127906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.128116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.128294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.128308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.128487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.128742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.128755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 
00:26:44.515 [2024-05-15 17:17:32.128934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.129134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.129147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.129469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.129655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.129669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.129899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.130080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.130093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.130358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.130555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.130569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.130776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.130897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.130911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.131027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.131204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.131217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.131448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.131620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.131637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 
00:26:44.515 [2024-05-15 17:17:32.131923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.132127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.132141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.132266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.132497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.132510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.132701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.132903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.132916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.133093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.133355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.515 [2024-05-15 17:17:32.133370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.515 qpair failed and we were unable to recover it. 00:26:44.515 [2024-05-15 17:17:32.133482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-05-15 17:17:32.133664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-05-15 17:17:32.133682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-05-15 17:17:32.133880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-05-15 17:17:32.134044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-05-15 17:17:32.134060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-05-15 17:17:32.134260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-05-15 17:17:32.134517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-05-15 17:17:32.134531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 
00:26:44.516 [2024-05-15 17:17:32.134762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-05-15 17:17:32.134924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-05-15 17:17:32.134937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.516 qpair failed and we were unable to recover it. 00:26:44.516 [2024-05-15 17:17:32.135137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-05-15 17:17:32.135440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.516 [2024-05-15 17:17:32.135456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.135663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.135933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.135954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.136207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.136428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.136448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.136649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.136846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.136862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.137070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.137308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.137323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.137511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.137713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.137726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 
00:26:44.781 [2024-05-15 17:17:32.137921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.138156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.138178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.138351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.138614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.138627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.138909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.139034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.139047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.139163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.139433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.139446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.139702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.139946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.139960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.140173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.140352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.140365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.140623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.140874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.140888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 
00:26:44.781 [2024-05-15 17:17:32.141053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.141330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.141344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.141619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.141848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.141862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.142096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.142226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.142241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.142504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.142791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.142805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.143037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.143162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.143189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.143448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.143640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.143654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.143853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.144105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.144118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 
00:26:44.781 [2024-05-15 17:17:32.144283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.144482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.144498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.144729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.145001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.145017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.145131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.145291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.145306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.145440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.145692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.145707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.145890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.146097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.146110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.146245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.146478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.146491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.781 [2024-05-15 17:17:32.146771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.146953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.146966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 
00:26:44.781 [2024-05-15 17:17:32.147093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.147270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.781 [2024-05-15 17:17:32.147284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.781 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.147383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.147633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.147647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.147920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.148050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.148063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.148290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.148471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.148484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.148695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.148880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.148893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.149129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.149337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.149351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.149581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.149851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.149864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 
00:26:44.782 [2024-05-15 17:17:32.150122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.150297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.150312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.150452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.150637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.150650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.150829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.151052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.151066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.151319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.151586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.151599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.151852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.152133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.152146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.152450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.152730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.152744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.152998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.153121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.153135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 
00:26:44.782 [2024-05-15 17:17:32.153390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.153618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.153631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.153832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.154010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.154023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.154276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.154527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.154541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.154706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.154881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.154895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.155018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.155184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.155198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.155378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.155540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.155553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.155834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.156081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.156095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 
00:26:44.782 [2024-05-15 17:17:32.156292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.156545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.156558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.156810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.157009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.157022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.157196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.157427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.157441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.157622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.157896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.157909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.158092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.158347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.158362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.158541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.158752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.158765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.158949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.159133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.159146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 
00:26:44.782 [2024-05-15 17:17:32.159410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.159640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.159654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.159940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.160119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.160132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.160426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.160625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.160639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.160810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.161072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.161085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.161274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.161502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.161515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.161812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.161915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.161928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.162109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.162295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.162311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 
00:26:44.782 [2024-05-15 17:17:32.162492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.162627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.162641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.162874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.163101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.163114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.163393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.163555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.163569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.163733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.163916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.163929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.164108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.164315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.164329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.164515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.164757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.164770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 00:26:44.782 [2024-05-15 17:17:32.164934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.165152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.782 [2024-05-15 17:17:32.165170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.782 qpair failed and we were unable to recover it. 
00:26:44.782 - 00:26:44.785 [2024-05-15 17:17:32.165279 - 17:17:32.230836] (the same error sequence repeats for each remaining connection attempt: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:26:44.785 [2024-05-15 17:17:32.231075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.231285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.231318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.231605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.231784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.231798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.232006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.232204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.232219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.232404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.232607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.232635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.232952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.233162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.233200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.233501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.233810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.233839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.234074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.234283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.234313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 
00:26:44.785 [2024-05-15 17:17:32.234583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.234712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.234726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.234985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.235192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.235229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.235505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.235768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.235798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.236092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.236308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.236338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.236649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.236830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.236843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.237022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.237263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.237293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.237513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.237718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.237746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 
00:26:44.785 [2024-05-15 17:17:32.237989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.238201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.238231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.238483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.238674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.238687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.238984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.239213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.239245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.239383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.239660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.239674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.239794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.239976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.239989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.240248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.240418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.240431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.240622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.240910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.240938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 
00:26:44.785 [2024-05-15 17:17:32.241178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.241412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.241441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.241750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.242015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.242045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.242261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.242548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.242577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.242776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.242932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.242961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.243186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.243389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.243423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.243660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.243958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.243986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.785 [2024-05-15 17:17:32.244305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.244550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.244564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 
00:26:44.785 [2024-05-15 17:17:32.244837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.245062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.785 [2024-05-15 17:17:32.245075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.785 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.245338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.245529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.245542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.245718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.245827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.245841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.246076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.246265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.246279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.246560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.246742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.246755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.246885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.247076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.247089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.247256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.247511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.247540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 
00:26:44.786 [2024-05-15 17:17:32.247846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.248064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.248099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.248411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.248531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.248544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.248745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.248930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.248944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.249203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.249405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.249435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.249727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.249952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.249981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.250389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.250648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.250663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.250894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.251134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.251174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 
00:26:44.786 [2024-05-15 17:17:32.251493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.251782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.251810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.252109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.252431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.252461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.252665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.252877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.252906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.253205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.253460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.253473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.253671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.253855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.253884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.254126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.254334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.254364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.254581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.254834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.254848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 
00:26:44.786 [2024-05-15 17:17:32.255092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.255361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.255394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.255662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.255946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.255974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.256245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.256517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.256546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.256800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.257090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.257118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.257456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.257695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.257723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.258007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.258230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.258259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.258526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.258665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.258694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 
00:26:44.786 [2024-05-15 17:17:32.258995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.259324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.259357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.259595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.259833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.259862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.260201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.260512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.260542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.260826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.261141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.261181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.261483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.261800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.261829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.262111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.262405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.262445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.262633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.262892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.262922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 
00:26:44.786 [2024-05-15 17:17:32.263198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.263546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.263575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.263855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.264085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.264098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.264382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.264617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.264631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.264873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.265073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.265086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.265325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.265509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.265538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.265748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.266010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.266039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.266328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.266494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.266507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 
00:26:44.786 [2024-05-15 17:17:32.266690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.266974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.267002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.267217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.267489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.267518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.267741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.267957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.267986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.268263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.268434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.268447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.268642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.268927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.268956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.269225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.269458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.269487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.269756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.270032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.270065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 
00:26:44.786 [2024-05-15 17:17:32.270312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.270597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.786 [2024-05-15 17:17:32.270625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.786 qpair failed and we were unable to recover it. 00:26:44.786 [2024-05-15 17:17:32.270870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.271157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.271207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.271436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.271725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.271754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.272084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.272372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.272403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.272676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.272891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.272919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.273217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.273531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.273544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.273730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.273895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.273908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 
00:26:44.787 [2024-05-15 17:17:32.274101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.274277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.274308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.274618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.274874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.274903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.275195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.275517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.275552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.275849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.276161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.276217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.276510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.276825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.276854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.277154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.277376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.277405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.277713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.278029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.278058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 
00:26:44.787 [2024-05-15 17:17:32.278356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.278562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.278591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.278811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.279117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.279145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.279373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.279686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.279721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.280032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.280229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.280269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.280533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.280650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.280663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.280846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.280948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.280962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.281134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.281303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.281317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 
00:26:44.787 [2024-05-15 17:17:32.281559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.281813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.281826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.282084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.282287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.282301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.282587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.282827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.282840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.283043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.283299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.283314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.283556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.283668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.283681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.283942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.284204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.284218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.284460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.284645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.284659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 
00:26:44.787 [2024-05-15 17:17:32.284920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.285177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.285191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.285379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.285579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.285592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.285831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.285944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.285957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.286224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.286479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.286493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.286736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.286915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.286928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.287105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.287305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.287326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.287563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.287681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.287695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 
00:26:44.787 [2024-05-15 17:17:32.287930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.288043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.288056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.288249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.288511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.288525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.288714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.288977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.288992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.289177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.289439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.289452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.289709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.289957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.289970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.290236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.290486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.290500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.290677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.290933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.290946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 
00:26:44.787 [2024-05-15 17:17:32.291061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.291345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.291360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.291597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.291779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.291793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.292065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.292328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.292342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.292527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.292695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.292709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.292928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.293186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.293200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.293463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.293690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.293703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.293888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.294122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.294136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 
00:26:44.787 [2024-05-15 17:17:32.294241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.294529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.294542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.294721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.295001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.295015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.295133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.295382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.295397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.295658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.295903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.295917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.296100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.296357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.296371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.296481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.296744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.296757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.787 qpair failed and we were unable to recover it. 00:26:44.787 [2024-05-15 17:17:32.297014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.297247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.787 [2024-05-15 17:17:32.297261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 
00:26:44.788 [2024-05-15 17:17:32.297524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.297755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.297768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.297945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.298142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.298155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.298421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.298531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.298545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.298783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.299015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.299029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.299313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.299509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.299526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.299711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.299914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.299927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.300162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.300371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.300384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 
00:26:44.788 [2024-05-15 17:17:32.300566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.300825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.300839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.301101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.301340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.301353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.301588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.301767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.301780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.302045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.302310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.302324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.302507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.302671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.302686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.302892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.303071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.303084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.303265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.303390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.303404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 
00:26:44.788 [2024-05-15 17:17:32.303663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.303975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.303988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.304181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.304380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.304394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.304575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.304831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.304844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.305107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.305353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.305367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.305550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.305808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.305821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.306003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.306205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.306219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.306400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.306656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.306670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 
00:26:44.788 [2024-05-15 17:17:32.306953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.307056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.307069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.307246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.307493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.307507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.307717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.307902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.307915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.308175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.308357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.308370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.308634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.308800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.308814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.309086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.309321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.309335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.309619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.309862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.309876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 
00:26:44.788 [2024-05-15 17:17:32.310113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.310396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.310410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.310655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.310911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.310924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.311047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.311242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.311257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.311469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.311748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.311761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.311972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.312140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.312154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.312463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.312715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.312728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.312908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.313170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.313183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 
00:26:44.788 [2024-05-15 17:17:32.313369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.313627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.313640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.313823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.314065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.314078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.314317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.314552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.314566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.314806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.315091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.315104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.315349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.315558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.315572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.315775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.315959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.315973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.316174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.316386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.316400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 
00:26:44.788 [2024-05-15 17:17:32.316660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.316847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.316860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.317032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.317267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.317300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.317577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.317862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.317890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.318108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.318386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.318424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.318658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.318914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.318928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.319194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.319423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.319436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 00:26:44.788 [2024-05-15 17:17:32.319699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.319967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.319981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.788 qpair failed and we were unable to recover it. 
00:26:44.788 [2024-05-15 17:17:32.320217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.788 [2024-05-15 17:17:32.320327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.320340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.320517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.320795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.320824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.321024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.321239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.321269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.321483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.321690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.321719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.321873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.322027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.322056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.322349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.322668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.322697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.322865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.323128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.323162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 
00:26:44.789 [2024-05-15 17:17:32.323402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.323587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.323615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.323912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.324105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.324134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.324521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.324841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.324858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.325062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.325344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.325359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.325551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.325820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.325850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.326147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.326480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.326510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.326803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.327110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.327139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 
00:26:44.789 [2024-05-15 17:17:32.327386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.327652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.327681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.327911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.328114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.328143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.328362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.328625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.328662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.328958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.329191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.329221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.329520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.329789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.329802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.330022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.330203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.330217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.330393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.330653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.330666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 
00:26:44.789 [2024-05-15 17:17:32.330869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.331100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.331114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.331349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.331587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.331615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.331908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.332201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.332231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.332557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.332851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.332879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.333147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.333440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.333469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.333703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.333986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.334016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.334292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.334563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.334577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 
00:26:44.789 [2024-05-15 17:17:32.334743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.335006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.335035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.335314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.335555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.335583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.335855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.336131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.336159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.336465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.336740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.336754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.336924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.337178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.337192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.337431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.337701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.337731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.338029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.338323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.338354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 
00:26:44.789 [2024-05-15 17:17:32.338576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.338840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.338869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.339134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.339373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.339387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.339650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.339830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.339844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.340104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.340273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.340303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.340518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.340747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.340760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.341030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.341282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.341296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.341549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.341801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.341814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 
00:26:44.789 [2024-05-15 17:17:32.341949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.342125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.342138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.342434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.342646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.342675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.342945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.343207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.343237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.343457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.343661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.343674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.343881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.344063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.344076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.344313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.344569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.344597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.344881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.345142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.345180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 
00:26:44.789 [2024-05-15 17:17:32.345461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.345715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.345729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.345934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.346147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.346184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.346351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.346613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.346642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.346940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.347215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.347245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.347534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.347814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.347843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.789 qpair failed and we were unable to recover it. 00:26:44.789 [2024-05-15 17:17:32.348079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.789 [2024-05-15 17:17:32.348370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.348400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.348623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.348909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.348938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 
00:26:44.790 [2024-05-15 17:17:32.349220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.349501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.349514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.349730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.349910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.349923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.350099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.350357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.350370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.350539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.350707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.350736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.351040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.351330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.351360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.351642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.351870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.351898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.352182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.352377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.352406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 
00:26:44.790 [2024-05-15 17:17:32.352669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.352863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.352876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.353073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.353335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.353366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.353652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.353933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.353962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.354182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.354416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.354445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.354663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.354929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.354958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.355122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.355448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.355479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.355778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.356087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.356115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 
00:26:44.790 [2024-05-15 17:17:32.356437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.356741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.356755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.357016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.357265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.357279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.357527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.357647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.357660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.357900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.358183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.358213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.358503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.358785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.358814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.359017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.359312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.359342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.359623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.359863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.359892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 
00:26:44.790 [2024-05-15 17:17:32.360206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.360434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.360463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.360716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.360865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.360895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.361200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.361352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.361365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.361638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.361848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.361876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.362090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.362359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.362389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.362547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.362829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.362842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.363124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.363411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.363425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 
00:26:44.790 [2024-05-15 17:17:32.363676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.363936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.363950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.364134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.364392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.364406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.364683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.364897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.364926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.365221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.365364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.365392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.365680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.365973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.366002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.366288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.366574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.366603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.366899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.367124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.367153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 
00:26:44.790 [2024-05-15 17:17:32.367449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.367660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.367688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.367900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.368135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.368185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.368460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.368663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.368676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.368941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.369186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.369200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.369404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.369640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.369668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.369965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.370231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.370262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.370613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.370887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.370916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 
00:26:44.790 [2024-05-15 17:17:32.371211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.371491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.371520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.371792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.371988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.372017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.372289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.372531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.372560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.372848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.373032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.373045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.373305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.373506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.373520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.373650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.373904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.373917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.374170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.374373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.374386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 
00:26:44.790 [2024-05-15 17:17:32.374580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.374854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.374883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.375180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.375462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.375490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.375828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.376028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.376057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.790 qpair failed and we were unable to recover it. 00:26:44.790 [2024-05-15 17:17:32.376352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.376583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.790 [2024-05-15 17:17:32.376614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.376839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.377123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.377152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.377381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.377696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.377724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.377977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.378189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.378219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 
00:26:44.791 [2024-05-15 17:17:32.378422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.378524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.378537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.378793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.379028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.379041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.379309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.379491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.379504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.379687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.379967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.379980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.380232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.380401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.380415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.380580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.380790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.380804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.381103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.381399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.381429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 
00:26:44.791 [2024-05-15 17:17:32.381652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.381857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.381886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.382185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.382450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.382479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.382753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.382931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.382945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.383204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.383413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.383427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.383673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.383861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.383890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.384209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.384507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.384536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.384828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.385073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.385102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 
00:26:44.791 [2024-05-15 17:17:32.385397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.385605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.385646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.385909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.386117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.386131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.386370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.386628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.386657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.386929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.387221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.387260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.387523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.387779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.387792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.388040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.388298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.388312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.388483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.388757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.388786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 
00:26:44.791 [2024-05-15 17:17:32.389057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.389277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.389307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.389621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.389785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.389814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.389971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.390236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.390266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.390557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.390816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.390829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.391066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.391325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.391339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.391604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.391843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.391857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.392116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.392298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.392312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 
00:26:44.791 [2024-05-15 17:17:32.392517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.392704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.392736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.392986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.393206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.393237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.393479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.393676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.393706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.393919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.394131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.394160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.394478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.394688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.394717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.395034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.395222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.395236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.395406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.395669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.395697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 
00:26:44.791 [2024-05-15 17:17:32.395911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.396204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.396239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.396472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.396746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.396759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.396944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.397233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.397263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.397546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.397811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.397840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.398138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.398434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.398464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.398739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.398979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.399007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.399281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.399573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.399602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 
00:26:44.791 [2024-05-15 17:17:32.399923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.400205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.400235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.400534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.400740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.400753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.400965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.401207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.401237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.791 [2024-05-15 17:17:32.401487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.401684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.791 [2024-05-15 17:17:32.401727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.791 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.401974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.402233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.402246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.402438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.402769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.402798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.403003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.403317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.403348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 
00:26:44.792 [2024-05-15 17:17:32.403549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.403811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.403841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.404013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.404260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.404290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.404600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.404862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.404890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.405189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.405414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.405443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.405679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.405880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.405894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.406097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.406361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.406375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.406664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.406959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.406992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 
00:26:44.792 [2024-05-15 17:17:32.407206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.407432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.407460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.407755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.407917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.407945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.408180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.408476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.408505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.408736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.409003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.409033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.409199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.409508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.409538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.409759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.409954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.409983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.410191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.410412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.410440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 
00:26:44.792 [2024-05-15 17:17:32.410583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.410766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.410799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.411084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.411316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.411347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.411642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.411808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.411824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.412012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.412295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.412324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.412609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.412806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.412819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.413080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.413331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.413346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 00:26:44.792 [2024-05-15 17:17:32.413602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.413799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.792 [2024-05-15 17:17:32.413813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:44.792 qpair failed and we were unable to recover it. 
00:26:45.065 [2024-05-15 17:17:32.489082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.489266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.489281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.489540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.489872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.489901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.490120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.490337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.490367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.490588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.490829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.490858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.491068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.491345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.491360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.491551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.491666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.491679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.491812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.492049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.492066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 
00:26:45.065 [2024-05-15 17:17:32.492250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.492439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.492452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.492708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.492877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.492891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.493119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.493257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.493271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.493403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.493645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.493674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.494001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.494326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.494357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.494659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.494880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.494894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.495132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.495374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.495389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 
00:26:45.065 [2024-05-15 17:17:32.495581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.495778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.495791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.495926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.496041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.496055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.496352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.496565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.496582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.496760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.496928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.496965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.497251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.497456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.497485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.497707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.498001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.498030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.065 [2024-05-15 17:17:32.498311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.498534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.498563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 
00:26:45.065 [2024-05-15 17:17:32.498779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.499019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.065 [2024-05-15 17:17:32.499034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.065 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.499327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.499538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.499552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.499725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.499911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.499941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.500106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.500395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.500426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.500632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.500808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.500837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.501006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.501318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.501348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.501520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.501678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.501707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 
00:26:45.066 [2024-05-15 17:17:32.502027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.502319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.502333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.502453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.502715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.502728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.502839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.503081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.503110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.503337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.503624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.503653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.503878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.504147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.504190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.504417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.504633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.504662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.504918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.505211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.505247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 
00:26:45.066 [2024-05-15 17:17:32.505524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.505781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.505810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.506068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.506208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.506222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.506468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.506635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.506664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.506971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.507220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.507252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.507409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.507571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.507600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.507818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.508110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.508138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.508370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.508587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.508616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 
00:26:45.066 [2024-05-15 17:17:32.508881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.509088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.509102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.509310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.509602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.509632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.509965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.510276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.510290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.510519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.510705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.510726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.511005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.511213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.511243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.066 [2024-05-15 17:17:32.511503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.511667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.066 [2024-05-15 17:17:32.511696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.066 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.512013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.512259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.512273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 
00:26:45.067 [2024-05-15 17:17:32.512423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.512661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.512691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.512967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.513188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.513221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.513442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.513660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.513689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.514003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.514187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.514201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.514413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.514656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.514686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.514993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.515233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.515263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.515428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.515716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.515745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 
00:26:45.067 [2024-05-15 17:17:32.516038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.516340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.516354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.516597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.516733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.516747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.517009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.517217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.517248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.517428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.517595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.517624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.517988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.518277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.518308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.518602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.518897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.518927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.519224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.519561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.519590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 
00:26:45.067 [2024-05-15 17:17:32.519839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.520048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.520076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.520295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.520535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.520565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.520784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.520985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.521014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.521228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.521532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.521561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.521794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.522059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.522088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.522290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.522475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.522489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.522681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.522855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.522869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 
00:26:45.067 [2024-05-15 17:17:32.523132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.523323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.523344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.523515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.523707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.523720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.523986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.524222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.524237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.524381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.524570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.524583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.524711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.524936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.524964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.525207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.525411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.525440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.525668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.525825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.525838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 
00:26:45.067 [2024-05-15 17:17:32.526049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.526359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.526390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.526540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.526756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.526785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.526999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.527192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.527206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.527399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.527515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.527529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.527654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.527851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.527864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.528133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.528329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.067 [2024-05-15 17:17:32.528343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.067 qpair failed and we were unable to recover it. 00:26:45.067 [2024-05-15 17:17:32.528555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.528744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.528774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 
00:26:45.068 [2024-05-15 17:17:32.529015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.529185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.529215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.529444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.529611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.529640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.529924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.530158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.530202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.530380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.530592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.530620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.530878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.531143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.531156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.531386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.531602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.531616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.531788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.531928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.531942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 
00:26:45.068 [2024-05-15 17:17:32.532205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.532343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.532357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.532565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.532739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.532753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.532941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.533228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.533242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.533444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.533636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.533649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.533934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.534204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.534219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.534356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.534610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.534623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.534839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.535036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.535050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 
00:26:45.068 [2024-05-15 17:17:32.535234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.535372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.535385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.535565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.535670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.535685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.535989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.536258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.536272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.536459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.536629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.536642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.536939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.537179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.537193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.537390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.537570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.537584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.537777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.538076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.538089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 
00:26:45.068 [2024-05-15 17:17:32.538215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.538348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.538362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.538568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.538701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.538715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.538864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.538989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.539002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.539271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.539407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.539421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.539612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.539923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.539937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.540147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.540335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.540349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.540642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.540922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.540936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 
00:26:45.068 [2024-05-15 17:17:32.541109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.541267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.541282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.541492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.541626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.541641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.541891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.542131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.542145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.068 [2024-05-15 17:17:32.542338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.542450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.068 [2024-05-15 17:17:32.542463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.068 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.542653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.542904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.542917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.543193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.543385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.543399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.543576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.543700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.543714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 
00:26:45.069 [2024-05-15 17:17:32.543926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.544194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.544208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.544339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.544484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.544497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.544682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.544895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.544908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.545104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.545296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.545311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.545438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.545622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.545636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.545896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.546179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.546193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.546335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.546515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.546529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 
00:26:45.069 [2024-05-15 17:17:32.546720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.546959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.546973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.547284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.547458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.547472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.547591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.547777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.547791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.548056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.548187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.548201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.548412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.548534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.548547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.548735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.548914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.548928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.549174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.549348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.549363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 
00:26:45.069 [2024-05-15 17:17:32.549555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.549749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.549762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.550020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.550151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.550182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.550420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.550680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.550693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.550933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.551176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.551191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.551398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.551528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.551541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.551735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.551940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.551954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.552234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.552478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.552492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 
00:26:45.069 [2024-05-15 17:17:32.552733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.552943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.552956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.553269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.553463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.553477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.553662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.553839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.553852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.554125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.554369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.554383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.554575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.554761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.554774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.555105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.555319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.555333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.555458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.555603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.555616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 
00:26:45.069 [2024-05-15 17:17:32.555899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.556140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.556156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.556367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.556565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.556578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.069 qpair failed and we were unable to recover it. 00:26:45.069 [2024-05-15 17:17:32.556758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.556880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.069 [2024-05-15 17:17:32.556893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.557160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.557368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.557382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.557497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.557680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.557693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.557985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.558179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.558194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.558379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.558647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.558660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 
00:26:45.070 [2024-05-15 17:17:32.558921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.559183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.559197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.559430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.559651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.559664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.559887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.560092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.560105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.560372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.560610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.560627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.560812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.561027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.561041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.561324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.561457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.561471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.561697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.561869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.561883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 
00:26:45.070 [2024-05-15 17:17:32.562148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.562345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.562359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.562595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.562851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.562865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.562990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.563252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.563266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.563534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.563792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.563805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.564040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.564231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.564245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.564453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.564643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.564656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.564871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.565058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.565074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 
00:26:45.070 [2024-05-15 17:17:32.565330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.565550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.565564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.565835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.566104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.566117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.566312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.566454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.566468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.566606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.566904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.566918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.567190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.567431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.567444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.567649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.567920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.567933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.568179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.568395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.568409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 
00:26:45.070 [2024-05-15 17:17:32.568609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.568746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.568759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.070 [2024-05-15 17:17:32.568894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.569152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.070 [2024-05-15 17:17:32.569173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.070 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.569362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.569594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.569607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.569724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.569958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.569971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.570142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.570270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.570284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.570420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.570590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.570604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.570846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.571085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.571098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 
00:26:45.071 [2024-05-15 17:17:32.571327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.571521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.571535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.571646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.571783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.571796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.572013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.572279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.572293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.572480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.572660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.572674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.572964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.573270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.573285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.573463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.573593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.573607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.573742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.573973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.573986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 
00:26:45.071 [2024-05-15 17:17:32.574250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.574381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.574395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.574571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.574765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.574778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.574959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.575227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.575242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.575501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.575616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.575629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.575744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.576022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.576035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.576246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.576490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.576503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.576646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.576919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.576933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 
00:26:45.071 [2024-05-15 17:17:32.577107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.577335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.577350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.577541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.577833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.577846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.578055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.578328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.578342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.578585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.578719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.578733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.578977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.579082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.579095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.579335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.579465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.579479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.579667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.579804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.579818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 
00:26:45.071 [2024-05-15 17:17:32.580111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.580283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.580297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.580512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.580698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.580713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.071 qpair failed and we were unable to recover it. 00:26:45.071 [2024-05-15 17:17:32.580898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.071 [2024-05-15 17:17:32.581008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.581022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.581157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.581346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.581360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.581544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.581677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.581691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.582003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.582120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.582134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.582396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.582665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.582679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 
00:26:45.072 [2024-05-15 17:17:32.582972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.583209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.583223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.583410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.583551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.583565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.583698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.583843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.583857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.584041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.584328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.584342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.584525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.584714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.584728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.584913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.585150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.585169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.585466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.585704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.585717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 
00:26:45.072 [2024-05-15 17:17:32.585967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.586155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.586184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.586324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.586521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.586535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.586746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.587016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.587029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.587157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.587430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.587444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.587650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.587770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.587784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.588058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.588314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.588328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.588483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.588675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.588689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 
00:26:45.072 [2024-05-15 17:17:32.588927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.589128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.589142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.589467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.589603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.589617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.589886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.590075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.590088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.590342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.590583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.590596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.590836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.591122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.591135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.591253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.591440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.591453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.591638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.591928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.591942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 
00:26:45.072 [2024-05-15 17:17:32.592228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.592417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.592431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.592602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.592879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.592893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.072 qpair failed and we were unable to recover it. 00:26:45.072 [2024-05-15 17:17:32.593084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.072 [2024-05-15 17:17:32.593374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.593388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.593519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.593711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.593725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.594004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.594217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.594232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.594423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.594697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.594710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.595035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.595273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.595288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 
00:26:45.073 [2024-05-15 17:17:32.595500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.595632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.595646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.595867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.596044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.596058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.596263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.596450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.596464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.596648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.596844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.596858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.597097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.597291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.597305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.597543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.597728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.597741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.597952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.598212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.598227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 
00:26:45.073 [2024-05-15 17:17:32.598422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.598612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.598627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.598814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.598941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.598955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.599142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.599359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.599373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.599624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.599790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.599809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.600080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.600289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.600305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.600443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.600624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.600638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.600871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.601124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.601137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 
00:26:45.073 [2024-05-15 17:17:32.601343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.601582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.601596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.601798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.602023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.602037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.602231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.602372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.602386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.602503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.602764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.602778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.603072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.603263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.603277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.603527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.603713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.603730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.604032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.604305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.604325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 
00:26:45.073 [2024-05-15 17:17:32.604448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.604591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.604605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.604787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.605054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.605069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.073 qpair failed and we were unable to recover it. 00:26:45.073 [2024-05-15 17:17:32.605312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.073 [2024-05-15 17:17:32.605432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.605445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.605629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.605894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.605908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.606021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.606277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.606292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.606483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.606591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.606605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.606740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.606863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.606877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 
00:26:45.074 [2024-05-15 17:17:32.607013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.607258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.607272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.607403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.607522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.607536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.607656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.607925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.607942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.608175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.608347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.608360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.608544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.608679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.608692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.608888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.609121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.609134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.609259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.609461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.609475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 
00:26:45.074 [2024-05-15 17:17:32.609709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.609977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.609991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.610310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.610442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.610456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.610588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.610762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.610776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.611070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.611212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.611225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.611414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.611585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.611600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.611737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.611997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.612012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.612197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.612341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.612355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 
00:26:45.074 [2024-05-15 17:17:32.612494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.612740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.612754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.612933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.613112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.613125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.613261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.613398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.613411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.613601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.613734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.613747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.074 [2024-05-15 17:17:32.613946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.614226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.074 [2024-05-15 17:17:32.614240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.074 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.614417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.614602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.614616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.614751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.615031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.615045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 
00:26:45.075 [2024-05-15 17:17:32.615285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.615417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.615431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.615589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.615716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.615730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.615941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.616213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.616228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.616361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.616539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.616553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.616682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.616861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.616874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.617154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.617407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.617422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.617556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.617669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.617682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 
00:26:45.075 [2024-05-15 17:17:32.617812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.618016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.618028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.618219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.618352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.618366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.618556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.618694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.618708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.618907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.619074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.619087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.619307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.619448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.619462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.619675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.619920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.619935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.620160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.620303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.620316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 
00:26:45.075 [2024-05-15 17:17:32.620498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.620633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.620647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.620846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.621080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.621094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.621294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.621417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.621431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.621625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.621927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.621940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.622177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.622415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.622428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.622612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.622743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.622757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.622881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.623075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.623089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 
00:26:45.075 [2024-05-15 17:17:32.623274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.623443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.623456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.623621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.623784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.623801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.624101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.624300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.624313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.624506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.624745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.624756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.625003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.625205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.625216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.075 qpair failed and we were unable to recover it. 00:26:45.075 [2024-05-15 17:17:32.625391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.075 [2024-05-15 17:17:32.625521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.625531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.625702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.625815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.625826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 
00:26:45.076 [2024-05-15 17:17:32.626097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.626277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.626289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.626445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.626641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.626650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.626894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.627070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.627080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.627302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.627427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.627437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.627645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.627933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.627943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.628173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.628366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.628376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.628555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.628689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.628699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 
00:26:45.076 [2024-05-15 17:17:32.628904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.629012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.629021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.629268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.629495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.629505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.629634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.629761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.629771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.630039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.630237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.630248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.630434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.630612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.630622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.630742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.631001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.631011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.631186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.631418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.631428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 
00:26:45.076 [2024-05-15 17:17:32.631533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.631661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.631671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.631992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.632120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.632130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.632358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.632539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.632549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.632706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.632953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.632963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.633197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.633306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.633316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.633437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.633547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.633557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.633738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.633975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.633985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 
00:26:45.076 [2024-05-15 17:17:32.634097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.634375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.634386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.634515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.634702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.634712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.634938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.635174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.635185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.635436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.635597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.635607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.635739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.635999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.636008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.076 qpair failed and we were unable to recover it. 00:26:45.076 [2024-05-15 17:17:32.636170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.076 [2024-05-15 17:17:32.636283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.636293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.636468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.636644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.636654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 
00:26:45.077 [2024-05-15 17:17:32.636957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.637183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.637194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.637328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.637455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.637465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.637579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.637761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.637771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.637985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.638097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.638107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.638376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.638541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.638569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.638794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.639079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.639108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.639351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.639596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.639625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 
00:26:45.077 [2024-05-15 17:17:32.639796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.640010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.640038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.640202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.640421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.640449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.640681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.640903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.640932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.641149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.641353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.641363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.641525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.641696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.641732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.642066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.642300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.642332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.642480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.642665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.642705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 
00:26:45.077 [2024-05-15 17:17:32.642845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.643131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.643159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.643406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.643615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.643644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.643974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.644138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.644190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.644348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.644564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.644595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.644818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.645107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.645135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.645310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.645525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.645554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.645760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.646047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.646076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 
00:26:45.077 [2024-05-15 17:17:32.646313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.646579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.646608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.646758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.647021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.647050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.647265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.647553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.647582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.647809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.648092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.648121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.648280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.648494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.648523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.077 qpair failed and we were unable to recover it. 00:26:45.077 [2024-05-15 17:17:32.648683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.077 [2024-05-15 17:17:32.649018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.649051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.649291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.649478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.649507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 
00:26:45.078 [2024-05-15 17:17:32.649739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.650010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.650038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.650320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.650433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.650443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.650571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.650698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.650708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.650879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.651098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.651127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.651292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.651453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.651482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.651634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.651794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.651822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.651974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.652223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.652233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 
00:26:45.078 [2024-05-15 17:17:32.652413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.652574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.652603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.652842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.653075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.653109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.653258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.653360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.653370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.653466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.653689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.653698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.653897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.654129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.654158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.654478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.654690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.654718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.655038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.655293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.655323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 
00:26:45.078 [2024-05-15 17:17:32.655621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.655823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.655851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.656051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.656221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.656251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.656472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.656687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.656716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.657057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.657310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.657339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.657611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.657773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.657807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.658007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.658305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.658336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.658497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.658678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.658706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 
00:26:45.078 [2024-05-15 17:17:32.658987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.659281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.659291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.659472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.659648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.659658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.659895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.660182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.660212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.660457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.660768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.660797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.661112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.661323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.661353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.078 qpair failed and we were unable to recover it. 00:26:45.078 [2024-05-15 17:17:32.661599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.661888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.078 [2024-05-15 17:17:32.661898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.662145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.662415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.662425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 
00:26:45.079 [2024-05-15 17:17:32.662628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.662747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.662757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.663021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.663287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.663317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.663616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.663967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.663995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.664275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.664547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.664576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.664923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.665197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.665208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.665346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.665533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.665560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.665858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.666086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.666115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 
00:26:45.079 [2024-05-15 17:17:32.666400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.666577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.666587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.666839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.666989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.667018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.667259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.667483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.667493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.667614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.667795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.667804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.667985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.668107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.668116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.668290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.668433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.668462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.668684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.669026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.669055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 
00:26:45.079 [2024-05-15 17:17:32.669302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.669446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.669475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.669765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.670033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.670042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.670271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.670469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.670498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.670715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.670874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.670903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.671197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.671465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.671495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.671638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.671882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.671891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.672014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.672148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.672187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 
00:26:45.079 [2024-05-15 17:17:32.672428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.672568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.672597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.079 qpair failed and we were unable to recover it. 00:26:45.079 [2024-05-15 17:17:32.672836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.079 [2024-05-15 17:17:32.673101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.673130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.673430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.673637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.673666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.673884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.674151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.674190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.674434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.674701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.674731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.674958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.675245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.675276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.675509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.675649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.675678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 
00:26:45.080 [2024-05-15 17:17:32.675974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.676260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.676290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.676502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.676665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.676694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.676965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.677252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.677282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.677565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.677727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.677756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.677989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.678269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.678279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.678460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.678644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.678654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.678777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.679050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.679060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 
00:26:45.080 [2024-05-15 17:17:32.679325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.679431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.679441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.679562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.679764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.679775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.679949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.680222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.680233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.680493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.680664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.680695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.681001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.681219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.681249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.681523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.681745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.681774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.682031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.682311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.682322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 
00:26:45.080 [2024-05-15 17:17:32.682523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.682702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.682732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.682879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.683145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.683185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.683486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.683678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.683689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.683810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.684014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.684024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.684183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.684366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.684376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.684602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.684721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.684762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.685044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.685255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.685284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 
00:26:45.080 [2024-05-15 17:17:32.685502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.685735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.685763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.080 qpair failed and we were unable to recover it. 00:26:45.080 [2024-05-15 17:17:32.686034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.080 [2024-05-15 17:17:32.686299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.686309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.686426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.686622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.686633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.686881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.687176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.687187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.687300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.687480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.687490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.687660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.687871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.687881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.688107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.688317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.688328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 
00:26:45.081 [2024-05-15 17:17:32.688585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.688704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.688714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.688908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.689088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.689109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.689237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.689355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.689366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.689544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.689659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.689669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.689847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.690056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.690066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.690265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.690517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.690546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.690857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.691068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.691097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 
00:26:45.081 [2024-05-15 17:17:32.691372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.691658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.691686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.691948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.692211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.692242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.692349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.692599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.692609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.692770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.692956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.692977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.693233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.693342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.693352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.693586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.693804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.693834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.694057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.694257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.694267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 
00:26:45.081 [2024-05-15 17:17:32.694493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.694659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.694688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.694976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.695256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.695276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.695539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.695826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.695855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.696086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.696237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.696267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.696540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.696714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.696743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.696901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.697217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.697251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.697563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.697784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.697814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 
00:26:45.081 [2024-05-15 17:17:32.698083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.698245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.698282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.081 [2024-05-15 17:17:32.698493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.698727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.081 [2024-05-15 17:17:32.698741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.081 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.698992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.699243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.699259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.699379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.699514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.699528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.699800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.700109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.700145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.700480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.700627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.700641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.700867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.701132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.701161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 
00:26:45.082 [2024-05-15 17:17:32.701439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.701691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.701721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.702010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.702291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.702332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.702524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.702734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.702747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.703035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.703178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.703192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.703410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.703698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.703727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.704019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.704310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.704341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.704618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.704772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.704786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 
00:26:45.082 [2024-05-15 17:17:32.705067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.705215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.705233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.705368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.705496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.705506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.705759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.705960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.705990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.706228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.706400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.706430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.706647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.706885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.706914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.707209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.707496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.707524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.707691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.708018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.708046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 
00:26:45.082 [2024-05-15 17:17:32.708249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.708528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.708537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.082 [2024-05-15 17:17:32.708768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.708927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.082 [2024-05-15 17:17:32.708937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.082 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.709192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.709379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.709389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.709522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.709646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.709658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.709958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.710146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.710156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.710292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.710400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.710410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.710662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.710772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.710781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 
00:26:45.356 [2024-05-15 17:17:32.710901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.711143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.711153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.711284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.711453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.711463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.711644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.711756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.711766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.711944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.712112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.712122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.712246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.712523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.712552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.712801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.713085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.713113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.713412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.713675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.713709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 
00:26:45.356 [2024-05-15 17:17:32.713994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.714190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.714220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.714385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.714626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.714656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.714893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.715206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.715216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.715400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.715629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.715639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.715829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.716027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.716055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.716218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.716433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.716463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.716775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.716979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.717008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 
00:26:45.356 [2024-05-15 17:17:32.717240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.717397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.717426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.717593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.717749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.717775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.356 [2024-05-15 17:17:32.718022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.718326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-05-15 17:17:32.718369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.356 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.718618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.718793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.718803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.719055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.719245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.719255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.719421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.719596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.719606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.719833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.720066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.720075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 
00:26:45.357 [2024-05-15 17:17:32.720248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.720376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.720386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.720502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.720668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.720678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.720954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.721230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.721260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.721469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.721690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.721700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.721828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.721938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.721948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.722077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.722318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.722332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.722453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.722579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.722589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 
00:26:45.357 [2024-05-15 17:17:32.722741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.722986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.723014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.723245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.723362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.723371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.723483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.723652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.723662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.723784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.723956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.723966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.724120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.724349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.724359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.724612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.724892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.724921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.725148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.725355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.725365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 
00:26:45.357 [2024-05-15 17:17:32.725494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.725664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.725674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.725871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.726093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.726123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.726318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.726527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.726547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.726772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.726963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.726992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.727332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.727540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.727568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.727740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.727863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.727892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.728187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.728440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.728450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 
00:26:45.357 [2024-05-15 17:17:32.728571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.728796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.728805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.728964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.729077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.729114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.729366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.729530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.357 [2024-05-15 17:17:32.729559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.357 qpair failed and we were unable to recover it. 00:26:45.357 [2024-05-15 17:17:32.729760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.729898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.729927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.730088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.730364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.730394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.730734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.731044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.731072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.731351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.731576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.731604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 
00:26:45.358 [2024-05-15 17:17:32.731820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.732032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.732061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.732331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.732568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.732597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.732891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.733111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.733140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.733310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.733472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.733500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.733803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.734085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.734113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.734292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.734554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.734582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.734737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.734973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.735001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 
00:26:45.358 [2024-05-15 17:17:32.735318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.735551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.735579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.735898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.736109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.736137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.736417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.736571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.736581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.736787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.737074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.737103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.737419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.737653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.737681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.737980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.738241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.738270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.738540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.738787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.738815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 
00:26:45.358 [2024-05-15 17:17:32.739033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.739326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.739356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.739510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.739733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.739743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.740037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.740304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.740333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.740534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.740758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.740767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.741033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.741270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.741280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.741373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.741583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.741593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.741753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.741881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.741891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 
00:26:45.358 [2024-05-15 17:17:32.742139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.742351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.742361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.742533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.742645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.742655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.742817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.743050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.743079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.358 qpair failed and we were unable to recover it. 00:26:45.358 [2024-05-15 17:17:32.743303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.358 [2024-05-15 17:17:32.743442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.743470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.743606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.743721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.743731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.743964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.744149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.744159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.744356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.744463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.744472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 
00:26:45.359 [2024-05-15 17:17:32.744598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.744773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.744783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.744973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.745186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.745217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.745452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.745606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.745635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.745798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.745999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.746028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.746176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.746344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.746381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.746559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.746724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.746733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.746942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.747256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.747286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 
00:26:45.359 [2024-05-15 17:17:32.747579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.747797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.747825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.748032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.748245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.748275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.748523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.748678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.748688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.748938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.749199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.749229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.749401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.749567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.749577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.749735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.750048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.750076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.750349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.750558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.750587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 
00:26:45.359 [2024-05-15 17:17:32.750724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.751001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.751011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.751131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.751399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.751409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.751535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.751758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.751768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.752011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.752251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.752261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.752433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.752554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.752564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.752707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.752936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.752945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.753127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.753377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.753388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 
00:26:45.359 [2024-05-15 17:17:32.753555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.753676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.753717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.753887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.754162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.754203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.754426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.754603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.754632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.359 qpair failed and we were unable to recover it. 00:26:45.359 [2024-05-15 17:17:32.754991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.755284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.359 [2024-05-15 17:17:32.755314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.360 qpair failed and we were unable to recover it. 00:26:45.360 [2024-05-15 17:17:32.755483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.360 [2024-05-15 17:17:32.755761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.360 [2024-05-15 17:17:32.755790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.360 qpair failed and we were unable to recover it. 00:26:45.360 [2024-05-15 17:17:32.756068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.360 [2024-05-15 17:17:32.756277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.360 [2024-05-15 17:17:32.756307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.360 qpair failed and we were unable to recover it. 00:26:45.360 [2024-05-15 17:17:32.756524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.360 [2024-05-15 17:17:32.756835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.360 [2024-05-15 17:17:32.756864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.360 qpair failed and we were unable to recover it. 
00:26:45.360 [2024-05-15 17:17:32.757079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.360 [2024-05-15 17:17:32.757343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.360 [2024-05-15 17:17:32.757372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:45.360 qpair failed and we were unable to recover it.
00:26:45.360 [2024-05-15 17:17:32.764487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.360 [2024-05-15 17:17:32.764715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.360 [2024-05-15 17:17:32.764733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420
00:26:45.360 qpair failed and we were unable to recover it.
00:26:45.365 [2024-05-15 17:17:32.816624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.365 [2024-05-15 17:17:32.816742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.365 [2024-05-15 17:17:32.816751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:45.365 qpair failed and we were unable to recover it.
00:26:45.365 [2024-05-15 17:17:32.817049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.365 [2024-05-15 17:17:32.817238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.365 [2024-05-15 17:17:32.817249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.365 qpair failed and we were unable to recover it. 00:26:45.365 [2024-05-15 17:17:32.817433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.365 [2024-05-15 17:17:32.817605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.365 [2024-05-15 17:17:32.817616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.365 qpair failed and we were unable to recover it. 00:26:45.365 [2024-05-15 17:17:32.817854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.365 [2024-05-15 17:17:32.818035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.365 [2024-05-15 17:17:32.818044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.365 qpair failed and we were unable to recover it. 00:26:45.365 [2024-05-15 17:17:32.818380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.818565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.818575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.818699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.818895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.818905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.819023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.819219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.819230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.819405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.819525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.819535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 
00:26:45.366 [2024-05-15 17:17:32.819818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.820107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.820118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.820272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.820500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.820510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.820630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.820851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.820861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.821096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.821306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.821317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.821424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.821532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.821542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.821660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.821828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.821838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.822010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.822140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.822150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 
00:26:45.366 [2024-05-15 17:17:32.822255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.822377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.822386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.822552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.822664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.822674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.822831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.823018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.823028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.823119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.823340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.823350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.823617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.823790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.823800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.824044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.824196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.824207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.824329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.824450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.824460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 
00:26:45.366 [2024-05-15 17:17:32.824681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.824778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.824788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.825050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.825216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.825227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.825394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.825551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.825561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.825673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.825804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.825814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.825988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.826107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.826119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.826242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.826329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.826339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.826463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.826632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.826644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 
00:26:45.366 [2024-05-15 17:17:32.826757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.826854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.826864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.826992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.827152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.827163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.366 qpair failed and we were unable to recover it. 00:26:45.366 [2024-05-15 17:17:32.827364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.366 [2024-05-15 17:17:32.827466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.827477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.827593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.827734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.827745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.828012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.828126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.828136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.828268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.828366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.828376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.828605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.828782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.828793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 
00:26:45.367 [2024-05-15 17:17:32.828907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.829059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.829069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.829225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.829448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.829458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.829575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.829676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.829686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.829903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.830075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.830085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.830206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.830366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.830376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.830501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.830674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.830684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.830806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.830927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.830937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 
00:26:45.367 [2024-05-15 17:17:32.831113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.831223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.831234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.831413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.831520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.831530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.831633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.831724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.831734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.831905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.832015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.832025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.832187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.832396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.832408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.832635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.832746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.832757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.832866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.832974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.832984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 
00:26:45.367 [2024-05-15 17:17:32.833101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.833226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.833238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.833347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.833449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.833459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.833559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.833678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.833689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.833789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.833902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.833912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.834020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.834129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.834139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.834259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.834440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.834450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.834558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.834664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.834674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 
00:26:45.367 [2024-05-15 17:17:32.834741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.834852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.834862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.367 [2024-05-15 17:17:32.834964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.835062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.367 [2024-05-15 17:17:32.835073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.367 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.835184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.835305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.835316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.835480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.835576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.835586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.835696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.835804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.835814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.835971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.836059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.836068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.836162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.836283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.836294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 
00:26:45.368 [2024-05-15 17:17:32.836387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.836610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.836620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.836708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.836872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.836882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.836974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.837156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.837174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.837276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.837436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.837447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.837550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.837652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.837662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.837785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.837877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.837886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.838122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.838230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.838241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 
00:26:45.368 [2024-05-15 17:17:32.838343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.838454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.838464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.838631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.838789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.838799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.838893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.838986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.838997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.839086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.839189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.839200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.839291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.839388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.839399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.839494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.839585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.839595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.839690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.839778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.839788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 
00:26:45.368 [2024-05-15 17:17:32.839874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.839961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.839970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.840079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.840170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.840181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.840284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.840392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.840402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.840498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.840590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.840599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.840695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.840786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.840796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.840908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.841027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.841036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.841126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.841317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.841328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 
00:26:45.368 [2024-05-15 17:17:32.841420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.841506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.841516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.368 [2024-05-15 17:17:32.841674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.841857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.368 [2024-05-15 17:17:32.841867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.368 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.842032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.842132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.842142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.842247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.842403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.842413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.842513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.842761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.842771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.842951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.843051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.843062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.843188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.843307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.843317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 
00:26:45.369 [2024-05-15 17:17:32.843490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.843614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.843624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.843787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.843947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.843957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.844113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.844208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.844221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.844290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.844405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.844415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.844576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.844667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.844677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.844775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.844864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.844874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.844974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.845092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.845103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 
00:26:45.369 [2024-05-15 17:17:32.845214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.845382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.845392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.845491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.845662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.845672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.845765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.845849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.845859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.845948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.846086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.846096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.846241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.846400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.846421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.846604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.846777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.846787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.846882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.847047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.847057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 
00:26:45.369 [2024-05-15 17:17:32.847234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.847395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.847404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.847523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.847633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.847642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.847848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.848004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.848013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.848101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.848257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.848267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.848376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.848515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.848524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.848631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.848770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.848780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.848947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.849056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.849065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 
00:26:45.369 [2024-05-15 17:17:32.849201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.849424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.849434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.369 [2024-05-15 17:17:32.849601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.849695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.369 [2024-05-15 17:17:32.849704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.369 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.849809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.849883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.849893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.849983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.850090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.850100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.850201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.850360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.850369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.850438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.850541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.850550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.850666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.850763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.850772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 
00:26:45.370 [2024-05-15 17:17:32.850863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.850976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.850986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.851076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.851191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.851207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.851307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.851531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.851541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.851665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.851792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.851802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.851916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.851998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.852008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.852097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.852256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.852266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.852375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.852471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.852481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 
00:26:45.370 [2024-05-15 17:17:32.852574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.852677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.852688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.852871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.852968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.852978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.853081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.853236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.853246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.853474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.853581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.853590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.853714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.853878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.853890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.853990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.854157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.854178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.854281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.854373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.854383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 
00:26:45.370 [2024-05-15 17:17:32.854542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.854662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.854672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.854758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.855004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.855013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.855119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.855242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.855252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.855396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.855522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.855531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.370 [2024-05-15 17:17:32.855640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.855738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.370 [2024-05-15 17:17:32.855748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.370 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.855851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.855967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.855977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.856200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.856298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.856309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 
00:26:45.371 [2024-05-15 17:17:32.856428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.856604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.856616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.856712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.856815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.856824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.856924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.857014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.857024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.857124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.857212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.857222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.857380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.857442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.857452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.857540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.857695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.857705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.857799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.857907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.857917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 
00:26:45.371 [2024-05-15 17:17:32.858072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.858185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.858195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.858305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.858392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.858401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.858490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.858658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.858667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.858753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.858858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.858870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.858959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.859140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.859150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.859268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.859409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.859418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.859522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.859677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.859687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 
00:26:45.371 [2024-05-15 17:17:32.859778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.859879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.859888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.860070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.860176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.860187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.860294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.860401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.860411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.860500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.860608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.860618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.860720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.860816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.860826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.860928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.861032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.861042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.861137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.861250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.861260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 
00:26:45.371 [2024-05-15 17:17:32.861362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.861461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.861470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.861647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.861747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.861757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.861931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.862026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.862035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.862149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.862259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.862269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.371 [2024-05-15 17:17:32.862438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.862523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.371 [2024-05-15 17:17:32.862533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.371 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.862683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.862781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.862791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.862892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.863058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.863068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 
00:26:45.372 [2024-05-15 17:17:32.863229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.863387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.863397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.863496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.863591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.863601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.863769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.864008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.864018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.864281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.864434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.864443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.864554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.864712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.864722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.864934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.865206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.865216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.865374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.865478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.865488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 
00:26:45.372 [2024-05-15 17:17:32.865607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.865872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.865882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.866035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.866188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.866198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.866364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.866458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.866467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.866686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.866880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.866889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.867114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.867294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.867304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.867441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.867599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.867609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.867840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.868041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.868051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 
00:26:45.372 [2024-05-15 17:17:32.868315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.868435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.868445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.868620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.868793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.868802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.868984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.869209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.869220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.869343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.869468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.869477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.869652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.869832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.869842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.870005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.870176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.870186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.870301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.870496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.870505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 
00:26:45.372 [2024-05-15 17:17:32.870680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.870892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.870902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.871076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.871192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.871201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.871361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.871494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.871503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.871674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.871782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.871791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.372 qpair failed and we were unable to recover it. 00:26:45.372 [2024-05-15 17:17:32.871957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.872220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.372 [2024-05-15 17:17:32.872230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.872354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.872466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.872476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.872602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.872829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.872838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 
00:26:45.373 [2024-05-15 17:17:32.873080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.873251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.873261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.873416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.873537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.873547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.873781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.873964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.873974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.874149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.874366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.874376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.874596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.874770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.874780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.875043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.875202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.875212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.875466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.875635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.875645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 
00:26:45.373 [2024-05-15 17:17:32.875775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.875950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.875960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.876116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.876296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.876307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.876484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.876659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.876669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.876915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.877092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.877103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.877321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.877444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.877454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.877555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.877800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.877809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.878038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.878141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.878178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 
00:26:45.373 [2024-05-15 17:17:32.878368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.878524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.878553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.878763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.879048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.879076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.879295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.879512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.879541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.879792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.880050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.880079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.880287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.880466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.880495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.880714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.880872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.880901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.881113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.881453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.881484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 
00:26:45.373 [2024-05-15 17:17:32.881707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.881983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.882012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.882197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.882404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.882433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.882674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.882917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.882927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.373 [2024-05-15 17:17:32.883094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.883266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.373 [2024-05-15 17:17:32.883276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.373 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.883536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.883732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.883761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.884052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.884217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.884227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.884350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.884599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.884628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 
00:26:45.374 [2024-05-15 17:17:32.884909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.885110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.885139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.885438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.885597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.885607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.885719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.885935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.885945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.886211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.886367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.886377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.886532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.886647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.886657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.886771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.886936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.886946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.887121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.887277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.887288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 
00:26:45.374 [2024-05-15 17:17:32.887405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.887658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.887687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.887930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.888197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.888227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.888493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.888750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.888779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.889053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.889319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.889329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.889509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.889635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.889644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.889970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.890168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.890178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.890300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.890405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.890415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 
00:26:45.374 [2024-05-15 17:17:32.890576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.890835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.890864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.891154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.891480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.891509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.891779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.892031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.892060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.892215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.892463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.892492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.892774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.893054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.893083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.893291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.893505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.893533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 00:26:45.374 [2024-05-15 17:17:32.893679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.893823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.893851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.374 qpair failed and we were unable to recover it. 
00:26:45.374 [2024-05-15 17:17:32.894142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.374 [2024-05-15 17:17:32.894372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.894382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.894538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.894771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.894799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.895063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.895358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.895388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.895674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.895890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.895919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.896119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.896355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.896385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.896607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.896793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.896802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.896969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.897129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.897155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 
00:26:45.375 [2024-05-15 17:17:32.897401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.897557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.897585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.897789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.898047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.898075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.898390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.898606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.898634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.898930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.899177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.899187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.899344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.899587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.899597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.899720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.899901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.899911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.900134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.900358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.900368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 
00:26:45.375 [2024-05-15 17:17:32.900494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.900668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.900678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.900920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.901168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.901178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.901450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.901681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.901691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.901887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.902136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.902145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.902353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.902548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.902576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.902740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.903021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.903049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.903344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.903551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.903579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 
00:26:45.375 [2024-05-15 17:17:32.903880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.904176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.904206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.904366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.904561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.904591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.904750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.905017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.905045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.905319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.905552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.905580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.375 qpair failed and we were unable to recover it. 00:26:45.375 [2024-05-15 17:17:32.905792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.375 [2024-05-15 17:17:32.905972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.906000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.906271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.906521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.906550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.906847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.907090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.907099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 
00:26:45.376 [2024-05-15 17:17:32.907354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.907524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.907533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.907764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.908022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.908050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.908330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.908545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.908574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.908887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.909191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.909221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.909388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.909677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.909706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.909979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.910243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.910253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.910439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.910679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.910688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 
00:26:45.376 [2024-05-15 17:17:32.910981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.911122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.911150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.911482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.911653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.911690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.911877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.912139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.912177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.912384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.912670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.912698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.912986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.913143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.913152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.913396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.913503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.913512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.913811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.913971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.914000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 
00:26:45.376 [2024-05-15 17:17:32.914318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.914533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.914561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.914836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.915090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.915099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.915367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.915635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.915645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.915767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.916021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.916050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.916191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.916468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.916502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.916789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.917059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.917068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.917315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.917538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.917547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 
00:26:45.376 [2024-05-15 17:17:32.917712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.917942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.917970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.918241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.918450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.918478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.918682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.918911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.918939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.376 [2024-05-15 17:17:32.919135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.919346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.376 [2024-05-15 17:17:32.919376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.376 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.919620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.919850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.919879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.920119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.920313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.920323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.920427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.920655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.920665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 
00:26:45.377 [2024-05-15 17:17:32.920887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.921128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.921162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.921475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.921784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.921813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.922101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.922259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.922304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.922572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.922716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.922745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.922969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.923210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.923240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.923551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.923858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.923887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.924191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.924386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.924396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 
00:26:45.377 [2024-05-15 17:17:32.924642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.924896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.924925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.925134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.925336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.925366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.925661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.925943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.925978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.926260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.926367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.926379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.926553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.926835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.926863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.927075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.927358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.927388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.927677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.928011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.928039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 
00:26:45.377 [2024-05-15 17:17:32.928273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.928486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.928514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.928801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.928940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.928968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.929239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.929430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.929440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.929540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.929715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.929724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.929970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.930143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.930152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.930378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.930493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.930502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.930658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.930827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.930855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 
00:26:45.377 [2024-05-15 17:17:32.931088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.931318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.931349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.931496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.931778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.931806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.932020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.932229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.932258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.377 qpair failed and we were unable to recover it. 00:26:45.377 [2024-05-15 17:17:32.932469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.377 [2024-05-15 17:17:32.932684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.932712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.932949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.933234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.933264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.933549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.933760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.933789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.933965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.934131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.934141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 
00:26:45.378 [2024-05-15 17:17:32.934302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.934405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.934415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.934666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.934910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.934919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.935083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.935256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.935266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.935481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.935702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.935711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.935867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.936032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.936041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.936215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.936439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.936448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.936563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.936811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.936821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 
00:26:45.378 [2024-05-15 17:17:32.937086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.937272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.937282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.937530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.937712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.937741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.937957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.938173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.938203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.938412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.938567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.938596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.938880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.939111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.939120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.939316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.939509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.939518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.939784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.940074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.940102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 
00:26:45.378 [2024-05-15 17:17:32.940394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.940666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.940694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.940852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.941078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.941106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.941333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.941533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.941562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.941828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.942018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.942047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.942362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.942509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.942538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.942740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.943016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.943045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.943239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.943486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.943496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 
00:26:45.378 [2024-05-15 17:17:32.943742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.943973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.943982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.944255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.944508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.944517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.378 qpair failed and we were unable to recover it. 00:26:45.378 [2024-05-15 17:17:32.944752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.378 [2024-05-15 17:17:32.944941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.944951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.945124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.945379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.945409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.945712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.946021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.946051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.946252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.946533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.946563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.946760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.947048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.947079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 
00:26:45.379 [2024-05-15 17:17:32.947370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.947601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.947629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.947942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.948254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.948284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.948599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.948892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.948901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.949153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.949378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.949388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.949658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.949812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.949822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.950026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.950286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.950316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.950472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.950758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.950798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 
00:26:45.379 [2024-05-15 17:17:32.951023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.951203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.951233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.951500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.951790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.951819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.952105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.952260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.952270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.952525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.952785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.952813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.953029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.953318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.953347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.953577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.953855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.953864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.953966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.954217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.954227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 
00:26:45.379 [2024-05-15 17:17:32.954425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.954597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.954617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.954867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.955122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.955132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.379 qpair failed and we were unable to recover it. 00:26:45.379 [2024-05-15 17:17:32.955382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.955556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.379 [2024-05-15 17:17:32.955566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.955825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.956111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.956139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.956395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.956596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.956624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.956839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.957047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.957075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.957379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.957586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.957595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 
00:26:45.380 [2024-05-15 17:17:32.957849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.958154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.958192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.958464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.958798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.958826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.959060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.959292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.959323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.959588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.959807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.959835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.960124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.960299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.960309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.960492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.960665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.960674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.960927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.961190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.961199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 
00:26:45.380 [2024-05-15 17:17:32.961306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.961551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.961561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.961737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.961977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.961987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.962209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.962463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.962492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.962779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.963116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.963144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.963390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.963671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.963699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.963931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.964184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.964214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.964507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.964783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.964812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 
00:26:45.380 [2024-05-15 17:17:32.965089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.965319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.965328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.965503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.965763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.965772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.965889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.966087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.966096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.966250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.966501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.966511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.966769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.966963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.966973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.967091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.967361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.967370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.967567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.967722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.967731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 
00:26:45.380 [2024-05-15 17:17:32.967899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.968068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.968078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.380 [2024-05-15 17:17:32.968339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.968533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.380 [2024-05-15 17:17:32.968542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.380 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.968767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.969002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.969031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.969314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.969605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.969634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.969864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.970125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.970153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.970420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.970689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.970718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.971009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.971277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.971287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 
00:26:45.381 [2024-05-15 17:17:32.971460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.971648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.971677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.971995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.972258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.972268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.972504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.972765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.972795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.972993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.973301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.973332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.973604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.973902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.973931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.974206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.974415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.974443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.974644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.974941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.974970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 
00:26:45.381 [2024-05-15 17:17:32.975241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.975466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.975475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.975698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.975926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.975935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.976229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.976421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.976431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.976684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.976934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.976944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.977105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.977346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.977356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.977526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.977781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.977790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.977947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.978190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.978220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 
00:26:45.381 [2024-05-15 17:17:32.978507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.978789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.978818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.979106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.979390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.979420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.979752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.980069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.980098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.980329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.980540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.980569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.980767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.981026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.981054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.981275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.981568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.981597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 00:26:45.381 [2024-05-15 17:17:32.981889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.982177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.982187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.381 qpair failed and we were unable to recover it. 
00:26:45.381 [2024-05-15 17:17:32.982435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.982695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.381 [2024-05-15 17:17:32.982704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.982882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.983073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.983102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.983308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.983590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.983618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.983913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.984207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.984217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.984323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.984586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.984595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.984790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.984952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.984985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.985195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.985459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.985487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 
00:26:45.382 [2024-05-15 17:17:32.985755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.986016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.986045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.986278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.986546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.986556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.986741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.986934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.986944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.987172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.987435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.987463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.987756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.988038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.988073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.988260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.988512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.988522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.988691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.988881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.988909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 
00:26:45.382 [2024-05-15 17:17:32.989203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.989428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.989456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.989675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.989906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.989939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.990182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.990340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.990370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.990600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.990883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.990917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.991090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.991219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.991229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.991511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.991747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.991756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.991875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.991997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.992007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 
00:26:45.382 [2024-05-15 17:17:32.992205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.992476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.992505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.992727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.992859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.992887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.993182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.993415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.993425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.993595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.993764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.993773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.994027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.994204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.994216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.994339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.994507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.994516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 00:26:45.382 [2024-05-15 17:17:32.994740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.994929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.382 [2024-05-15 17:17:32.994938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.382 qpair failed and we were unable to recover it. 
00:26:45.383 [2024-05-15 17:17:32.995178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.995463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.995491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.383 qpair failed and we were unable to recover it. 00:26:45.383 [2024-05-15 17:17:32.995710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.995865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.995894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.383 qpair failed and we were unable to recover it. 00:26:45.383 [2024-05-15 17:17:32.996160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.996429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.996438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.383 qpair failed and we were unable to recover it. 00:26:45.383 [2024-05-15 17:17:32.996600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.996768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.996777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.383 qpair failed and we were unable to recover it. 00:26:45.383 [2024-05-15 17:17:32.996897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.997127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.997156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.383 qpair failed and we were unable to recover it. 00:26:45.383 [2024-05-15 17:17:32.997492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.997785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.997814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.383 qpair failed and we were unable to recover it. 00:26:45.383 [2024-05-15 17:17:32.998047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.998301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.998311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.383 qpair failed and we were unable to recover it. 
00:26:45.383 [2024-05-15 17:17:32.998561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.998735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.998770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.383 qpair failed and we were unable to recover it. 00:26:45.383 [2024-05-15 17:17:32.998998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.999143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.999178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.383 qpair failed and we were unable to recover it. 00:26:45.383 [2024-05-15 17:17:32.999398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.999550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:32.999577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.383 qpair failed and we were unable to recover it. 00:26:45.383 [2024-05-15 17:17:32.999846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:33.000109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.383 [2024-05-15 17:17:33.000138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.383 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.000375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.000647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.000657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.000830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.000952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.000961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.001140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.001366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.001376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 
00:26:45.653 [2024-05-15 17:17:33.001669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.001836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.001846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.001958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.002131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.002141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.002301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.002472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.002481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.002762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.003047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.003076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.003268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.003539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.003549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.003797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.004055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.004065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.004240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.004490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.004500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 
00:26:45.653 [2024-05-15 17:17:33.004684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.004841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.004851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.005099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.005232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.005242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.005475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.005703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.005731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.005959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.006237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.006247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.006522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.006765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.006795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.006942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.007117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.007127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.007418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.007664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.007674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 
00:26:45.653 [2024-05-15 17:17:33.007898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.008176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.008186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.008441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.008667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.008677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.008867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.009037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.009066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.009284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.009621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.009650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.009945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.010268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.010277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.010394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.010517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.010527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.010709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.010995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.011023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 
00:26:45.653 [2024-05-15 17:17:33.011236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.011470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.011499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.011819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.012070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.012080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.012273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.012524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.012533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.012789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.012960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.012969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.013141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.013333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.013344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.013535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.013748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.013778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.014017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.014283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.014314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 
00:26:45.653 [2024-05-15 17:17:33.014496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.014779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.014809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.015073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.015241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.015251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.015498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.015734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.015744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.016016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.016265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.016275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.016523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.016785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.016795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.016981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.017232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.017242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 00:26:45.653 [2024-05-15 17:17:33.017529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.017723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.017733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.653 qpair failed and we were unable to recover it. 
00:26:45.653 [2024-05-15 17:17:33.017906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.018079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.653 [2024-05-15 17:17:33.018088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.018270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.018513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.018523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.018698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.018787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.018797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.019042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.019217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.019227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.019500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.019725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.019735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.019924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.020172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.020202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.020499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.020807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.020835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 
00:26:45.654 [2024-05-15 17:17:33.021152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.021351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.021361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.021583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.021809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.021818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.022013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.022206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.022236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.022437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.022716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.022745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.023070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.023275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.023285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.023461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.023638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.023648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.023825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.023995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.024004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 
00:26:45.654 [2024-05-15 17:17:33.024230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.024489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.024517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.024735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.025023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.025051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.025323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.025574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.025584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.025710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.025982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.026010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.026241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.026482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.026511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.026735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.026886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.026914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.027184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.027533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.027562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 
00:26:45.654 [2024-05-15 17:17:33.027765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.028034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.028063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.028357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.028644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.028672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.029005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.029297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.029326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.029496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.029806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.029835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.030026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.030280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.030310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.030590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.030877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.030906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.031193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.031515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.031525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 
00:26:45.654 [2024-05-15 17:17:33.031781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.032012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.032021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.032255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.032437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.032446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.032714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.032942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.032952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.033231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.033502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.033512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.033767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.034014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.034024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.034216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.034394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.034403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.034627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.034786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.034797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 
00:26:45.654 [2024-05-15 17:17:33.035044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.035207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.035217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.035395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.035573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.035583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.035813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.035968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.035978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.036153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.036393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.036404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.036657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.036755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.036764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.036944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.037054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.037063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.037261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.037419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.037429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 
00:26:45.654 [2024-05-15 17:17:33.037683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.037913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.037923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.038100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.038278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.038288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.038476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.038650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.038659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.038862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.039115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.039125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.039361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.039539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.039549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.039798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.040033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.040043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.040284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.040456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.040465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 
00:26:45.654 [2024-05-15 17:17:33.040622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.040740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.040750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.040996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.041266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.041276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.654 [2024-05-15 17:17:33.041497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.041623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.654 [2024-05-15 17:17:33.041632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.654 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.041806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.041905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.041915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.042156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.042336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.042346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.042591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.042864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.042873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.043041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.043269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.043279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 
00:26:45.655 [2024-05-15 17:17:33.043394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.043636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.043646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.043892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.044087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.044096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.044300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.044549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.044559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.044806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.044979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.044988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.045160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.045274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.045285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.045463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.045622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.045632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.045876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.046044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.046053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 
00:26:45.655 [2024-05-15 17:17:33.046287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.046532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.046542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.046729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.046834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.046843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.047099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.047334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.047345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.047449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.047650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.047659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.047879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.048087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.048096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.048338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.048536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.048545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.048742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.048963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.048972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 
00:26:45.655 [2024-05-15 17:17:33.049145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.049401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.049412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.049604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.049773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.049782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.050056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.050304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.050315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.050492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.050660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.050670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.050870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.051052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.051062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.051284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.051453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.051463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.051728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.051932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.051943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 
00:26:45.655 [2024-05-15 17:17:33.052049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.052273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.052283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.052531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.052687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.052697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.052858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.053084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.053093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.053268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.053510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.053520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.053768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.053964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.053974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.054171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.054423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.054433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.054713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.054811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.054820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 
00:26:45.655 [2024-05-15 17:17:33.055067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.055324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.055335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.055448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.055668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.055678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.055873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.056104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.056113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.056275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.056446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.056455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.056637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.056882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.056891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.057162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.057427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.057439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.057611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.057832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.057842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 
00:26:45.655 [2024-05-15 17:17:33.058039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.058243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.058253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.058355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.058602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.058612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.058884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.059058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.059068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.059259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.059484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.059493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.059738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.060027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.060036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.060221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.060399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.060409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.655 qpair failed and we were unable to recover it. 00:26:45.655 [2024-05-15 17:17:33.060580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.655 [2024-05-15 17:17:33.060828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.060837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 
00:26:45.656 [2024-05-15 17:17:33.061080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.061333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.061344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.061512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.061763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.061775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.062040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.062279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.062289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.062410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.062633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.062643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.062862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.063088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.063099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.063350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.063513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.063523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.063750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.063921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.063930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 
00:26:45.656 [2024-05-15 17:17:33.064115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.064361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.064372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.064551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.064761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.064771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.064892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.065068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.065077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.065324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.065582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.065591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.065758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.065922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.065935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.066187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.066419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.066429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.066604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.066781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.066791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 
00:26:45.656 [2024-05-15 17:17:33.067059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.067213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.067222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.067422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.067643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.067653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.067815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.068039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.068049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.068138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.068341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.068351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.068515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.068712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.068723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.068997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.069153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.069168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.069276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.069498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.069508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 
00:26:45.656 [2024-05-15 17:17:33.069736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.069991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.070003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.070182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.070357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.070366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.070638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.070837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.070848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.071021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.071293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.071304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.071529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.071696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.071705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.071952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.072141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.072152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.072289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.072539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.072550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 
00:26:45.656 [2024-05-15 17:17:33.072758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.072968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.072977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.073152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.073350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.073360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.073602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.073771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.073781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.073943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.074068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.074078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.074245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.074470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.074479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.074727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.074920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.074930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.075095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.075320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.075330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 
00:26:45.656 [2024-05-15 17:17:33.075577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.075738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.075748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.075929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.076081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.076090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.076319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.076516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.076526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.076753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.076956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.076965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.077190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.077324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.077333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.077525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.077624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.077634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.077876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.078128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.078138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 
00:26:45.656 [2024-05-15 17:17:33.078340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.078534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.078544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.078719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.078898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.078908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.079102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.079349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.079360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.079613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.079734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.079743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.080005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.080162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.080176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.080353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.080508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.080518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.080616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.080785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.080795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 
00:26:45.656 [2024-05-15 17:17:33.080960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.081181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.081191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.656 qpair failed and we were unable to recover it. 00:26:45.656 [2024-05-15 17:17:33.081388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.656 [2024-05-15 17:17:33.081510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.081520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.081689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.081860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.081871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.082147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.082257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.082268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.082448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.082611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.082621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.082874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.083101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.083110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.083375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.083637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.083646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 
00:26:45.657 [2024-05-15 17:17:33.083767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.084020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.084030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.084185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.084351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.084360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.084537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.084723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.084732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.084947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.085201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.085211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.085432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.085542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.085552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.085742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.085917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.085926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.086112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.086214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.086224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 
00:26:45.657 [2024-05-15 17:17:33.086412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.086658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.086668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.086869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.087064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.087074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.087326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.087498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.087508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.087733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.087920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.087930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.088086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.088248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.088259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.088425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.088623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.088633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.088859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.089130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.089140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 
00:26:45.657 [2024-05-15 17:17:33.089338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.089518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.089528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.089751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.089992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.090002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.090250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.090348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.090358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.090592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.090824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.090833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.091003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.091203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.091213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.091383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.091556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.091565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.091790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.092038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.092047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 
00:26:45.657 [2024-05-15 17:17:33.092272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.092516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.092525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.092748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.092949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.092958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.093152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.093430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.093440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.093602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.093847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.093857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.094050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.094225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.094234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.094413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.094670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.094680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.094785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.094951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.094961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 
00:26:45.657 [2024-05-15 17:17:33.095161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.095441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.095451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.095575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.095796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.095805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.096026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.096129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.096139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.096313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.096556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.096566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.096757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.096910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.096920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.097158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.097344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.097354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.097532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.097805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.097815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 
00:26:45.657 [2024-05-15 17:17:33.098040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.098297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.098307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.098586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.098690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.098700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.098959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.099118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.099128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.099298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.099469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.099479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.099728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.099971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.099981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.100079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.100243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.100253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.100515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.100765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.100774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 
00:26:45.657 [2024-05-15 17:17:33.100950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.101120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.101130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.101388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.101506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.101516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.101723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.101840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.101849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.657 [2024-05-15 17:17:33.102070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.102321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.657 [2024-05-15 17:17:33.102331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.657 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.102573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.102752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.102762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.102962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.103150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.103159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.103461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.103640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.103649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 
00:26:45.658 [2024-05-15 17:17:33.103934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.104183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.104193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.104470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.104695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.104705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.104970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.105124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.105133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.105305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.105576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.105585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.105809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.105979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.105988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.106147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.106332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.106342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.106502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.106674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.106683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 
00:26:45.658 [2024-05-15 17:17:33.106800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.106918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.106928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.107198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.107371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.107380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.107601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.107851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.107861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.108032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.108212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.108222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.108495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.108653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.108663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.108908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.109064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.109073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.109314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.109557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.109567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 
00:26:45.658 [2024-05-15 17:17:33.109744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.109994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.110004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.110176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.110352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.110362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.110553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.110654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.110664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.110841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.111017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.111026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.111285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.111559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.111569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.111684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.111799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.111808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.111978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.112156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.112196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 
00:26:45.658 [2024-05-15 17:17:33.112410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.112540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.112568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.112781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.113047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.113076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.113286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.113355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.113364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.113468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.113689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.113699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.113791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.113961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.113972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.114076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.114175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.114185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.114328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.114433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.114442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 
00:26:45.658 [2024-05-15 17:17:33.114599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.114854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.114883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.115084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.115373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.115404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.115615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.115818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.115847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.115995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.116277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.116306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.116549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.116651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.116660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.116827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.116928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.116939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.117185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.117285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.117295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 
00:26:45.658 [2024-05-15 17:17:33.117409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.117569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.117579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.117744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.117969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.117997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.118235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.118384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.118413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.118715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.118806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.118815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.119062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.119172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.119182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.119308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.119542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.119571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 00:26:45.658 [2024-05-15 17:17:33.119768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.119961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.658 [2024-05-15 17:17:33.119990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.658 qpair failed and we were unable to recover it. 
00:26:45.659 [2024-05-15 17:17:33.120223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.120484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.120512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.120804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.121071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.121099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.121233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.121471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.121500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.121697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.121853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.121880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.122077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.122299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.122329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.122537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.122703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.122737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.122936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.123134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.123174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 
00:26:45.659 [2024-05-15 17:17:33.123335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.123534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.123563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.123783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.124052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.124080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.124217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.124384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.124394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.124514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.124670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.124680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.124784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.124872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.124882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.124983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.125084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.125094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.125344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.125460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.125470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 
00:26:45.659 [2024-05-15 17:17:33.125733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.125925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.125935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.126037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.126137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.126148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.126418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.126612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.126641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.126836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.127095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.127123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.127330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.127631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.127660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.127806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.128002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.128030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.128245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.128474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.128502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 
00:26:45.659 [2024-05-15 17:17:33.128719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.128919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.128947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.129225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.129437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.129465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.129619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.129876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.129905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.130048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.130278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.130319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.130509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.130751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.130763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.130867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.130984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.130994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.131102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.131277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.131288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 
00:26:45.659 [2024-05-15 17:17:33.131517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.131635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.131663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.131883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.132143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.132198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.132315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.132479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.132489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.132643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.132854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.132882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.133101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.133269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.133299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.133531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.133719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.133748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.134038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.134194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.134224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 
00:26:45.659 [2024-05-15 17:17:33.134539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.134803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.134814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.135035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.135257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.135267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.135467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.135725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.135753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.136029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.136265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.136275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.136444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.136677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.136706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.136861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.137088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.137116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.137393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.137594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.137622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 
00:26:45.659 [2024-05-15 17:17:33.137778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.137982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.138010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.138270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.138356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.138366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.138512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.138701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.138711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.138865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.139090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.139100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.139283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.139439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.139462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.139729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.139867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.139895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.140111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.140245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.140275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 
00:26:45.659 [2024-05-15 17:17:33.140590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.140762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.140771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.140871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.141093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.141103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.141281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.141402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.141412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.659 qpair failed and we were unable to recover it. 00:26:45.659 [2024-05-15 17:17:33.141586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.659 [2024-05-15 17:17:33.141737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.141747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.141927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.142136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.142175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.142318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.142609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.142637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.142929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.143155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.143196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 
00:26:45.660 [2024-05-15 17:17:33.143436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.143662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.143690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.143870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.144006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.144035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.144180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.144372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.144400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.144699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.144842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.144870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.145069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.145281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.145311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.145485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.145738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.145766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.146051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.146193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.146223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 
00:26:45.660 [2024-05-15 17:17:33.146435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.146536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.146546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.146736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.146988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.147016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.147184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.147332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.147361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.147639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.147738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.147748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.147863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.148110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.148119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.148302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.148457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.148466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.148573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.148669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.148678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 
00:26:45.660 [2024-05-15 17:17:33.148851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.149019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.149029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.149129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.149308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.149318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.149568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.149746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.149755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.149940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.150124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.150152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.150323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.150603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.150637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.150726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.150952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.150962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.151076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.151248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.151258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 
00:26:45.660 [2024-05-15 17:17:33.151358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.151469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.151478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.151566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.151732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.151741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.151992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.152192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.152222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.152381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.152656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.152684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.152855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.153052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.153081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.153312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.153614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.153643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.153924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.154055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.154084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 
00:26:45.660 [2024-05-15 17:17:33.154330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.154437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.154446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.154754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.155010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.155039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.155284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.155384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.155394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.155484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.155623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.155652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.155854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.156135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.156172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.156330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.156533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.156561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.156696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.156854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.156863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 
00:26:45.660 [2024-05-15 17:17:33.157024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.157204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.157214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.157437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.157612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.157621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.157786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.157954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.157988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.158202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.158416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.158444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.158702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.158782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.158791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.158960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.159121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.159131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.660 [2024-05-15 17:17:33.159333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.159452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.159461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 
00:26:45.660 [2024-05-15 17:17:33.159614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.159771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.660 [2024-05-15 17:17:33.159780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.660 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.159935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.160109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.160137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.160305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.160462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.160491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.160637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.160794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.160803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.160905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.161002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.161011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.161111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.161284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.161294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.161385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.161554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.161563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 
00:26:45.661 [2024-05-15 17:17:33.161786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.161886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.161895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.162114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.162282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.162313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.162466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.162630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.162639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.162870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.163009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.163037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.163200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.163330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.163359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.163556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.163779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.163789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.163966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.164135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.164144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 
00:26:45.661 [2024-05-15 17:17:33.164306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.164565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.164594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.164746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.164892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.164921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.165136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.165354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.165384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.165534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.165792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.165820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.166149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.166397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.166426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.166585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.166856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.166866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.167095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.167226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.167257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 
00:26:45.661 [2024-05-15 17:17:33.167409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.167564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.167593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.167804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.167970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.167980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.168152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.168257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.168267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.168528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.168665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.168694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.168826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.168990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.169019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.169234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.169435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.169463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.169660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.169858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.169886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 
00:26:45.661 [2024-05-15 17:17:33.170151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.170378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.170408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.170557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.170649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.170659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.170748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.170928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.170937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.171117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.171234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.171244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.171411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.171513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.171523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.171759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.171879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.171908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.172127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.172403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.172440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 
00:26:45.661 [2024-05-15 17:17:33.172678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.172778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.172787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.172956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.173179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.173188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.173327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.173564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.173593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.173795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.173939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.173968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.174257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.174447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.174476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.174719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.174869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.174898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.175175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.175383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.175411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 
00:26:45.661 [2024-05-15 17:17:33.175672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.175773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.175782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.175938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.176107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.176116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.176316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.176571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.176599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.176800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.177009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.177037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.177282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.177496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.177524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.177713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.177790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.177800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.177980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.178092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.178102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 
00:26:45.661 [2024-05-15 17:17:33.178181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.178284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.178293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.178464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.178645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.178674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.661 qpair failed and we were unable to recover it. 00:26:45.661 [2024-05-15 17:17:33.178893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.179199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.661 [2024-05-15 17:17:33.179229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.179453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.179666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.179694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.179982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.180124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.180153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.180380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.180640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.180668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.180871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.181042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.181070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 
00:26:45.662 [2024-05-15 17:17:33.181343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.181484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.181513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.181735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.181961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.181989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.182305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.182520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.182554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.182844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.183122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.183150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.183487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.183645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.183655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.183831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.184006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.184015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.184270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.184471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.184499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 
00:26:45.662 [2024-05-15 17:17:33.184782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.184899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.184908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.185138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.185383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.185413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.185687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.185909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.185938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.186131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.186364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.186393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.186612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.186884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.186913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.187048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.187252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.187287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.187501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.187752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.187780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 
00:26:45.662 [2024-05-15 17:17:33.187994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.188253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.188283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.188494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.188667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.188695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.188897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.189037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.189065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.189289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.189524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.189552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.189695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.189936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.189964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.190183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.190415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.190444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.190730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.190830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.190840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 
00:26:45.662 [2024-05-15 17:17:33.190967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.191139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.191149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.191356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.191561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.191594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.191821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.192029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.192057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.192328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.192482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.192510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.192771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.192887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.192896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.193095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.193305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.193335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.193581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.193717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.193726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 
00:26:45.662 [2024-05-15 17:17:33.193947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.194221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.194251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.194512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.194755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.194765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.194886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.195039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.195049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.195155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.195315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.195325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.195439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.195554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.195565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.195768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.195968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.195997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.196160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.196367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.196397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 
00:26:45.662 [2024-05-15 17:17:33.196545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.196684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.196713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.196939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.197145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.197185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.197448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.197600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.197609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.197880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.198061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.198070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.198293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.198483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.198511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.198805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.198958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.198987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.199139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.199296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.199339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 
00:26:45.662 [2024-05-15 17:17:33.199509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.199774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.199802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.199961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.200244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.200274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.200560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.200783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.200793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.662 [2024-05-15 17:17:33.200972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.201161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.662 [2024-05-15 17:17:33.201198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.662 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.201400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.201542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.201571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.201781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.201902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.201912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.202020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.202204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.202214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 
00:26:45.663 [2024-05-15 17:17:33.202406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.202573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.202583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.202778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.202952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.202980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.203143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.203316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.203346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.203538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.203724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.203752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.204033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.204163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.204200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.204334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.204537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.204565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.204722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.204834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.204844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 
00:26:45.663 [2024-05-15 17:17:33.204948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.205108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.205117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.205208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.205384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.205393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.205567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.205667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.205676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.205864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.205966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.205975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.206076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.206346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.206356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.206460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.206559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.206568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.206675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.206774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.206784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 
00:26:45.663 [2024-05-15 17:17:33.206974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.207076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.207086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.207252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.207475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.207484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.207656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.207903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.207913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.208184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.208359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.208368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.208568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.208772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.208800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.209018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.209223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.209252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.209446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.209545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.209554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 
00:26:45.663 [2024-05-15 17:17:33.209822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.210057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.210067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.210298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.210546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.210555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.210729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.210986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.211014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.211307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.211449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.211477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.211758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.211880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.211889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.211982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.212161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.212200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.212403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.212669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.212697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 
00:26:45.663 [2024-05-15 17:17:33.212886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.213042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.213052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.213233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.213404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.213413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.213528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.213697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.213707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.213944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.214041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.214051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.214274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.214367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.214376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.214615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.214834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.214862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.215158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.215359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.215389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 
00:26:45.663 [2024-05-15 17:17:33.215605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.215780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.215809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.216056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.216223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.216253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.216468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.216727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.216756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.216911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.217123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.217152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.217308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.217619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.217647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.217868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.218128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.663 [2024-05-15 17:17:33.218157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.663 qpair failed and we were unable to recover it. 00:26:45.663 [2024-05-15 17:17:33.218391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.218619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.218648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 
00:26:45.664 [2024-05-15 17:17:33.218912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.219107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.219134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.219383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.219609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.219637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.219816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.219914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.219923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.220078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.220240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.220251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.220408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.220557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.220586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.220732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.220862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.220890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.221087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.221222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.221252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 
00:26:45.664 [2024-05-15 17:17:33.221541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.221676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.221705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.221847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.222078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.222106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.222323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.222525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.222554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.222705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.222834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.222862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.223133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.223418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.223448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.223629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.223781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.223810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.223954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.224158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.224199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 
00:26:45.664 [2024-05-15 17:17:33.224405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.224665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.224693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.224892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.225031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.225059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.225323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.225554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.225582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.225815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.225967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.225996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.226256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.226481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.226509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.226767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.226888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.226898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.227137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.227316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.227346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 
00:26:45.664 [2024-05-15 17:17:33.227504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.227710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.227750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.227858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.228100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.228129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.228295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.228555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.228583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.228862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.229096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.229125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.229426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.229661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.229689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.229896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.230006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.230034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.230186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.230401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.230429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 
00:26:45.664 [2024-05-15 17:17:33.230585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.230796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.230824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.231037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.231236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.231266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.231591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.231800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.231828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.232094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.232234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.232265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.232557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.232779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.232789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.232962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.233190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.233220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.233353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.233565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.233575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 
00:26:45.664 [2024-05-15 17:17:33.233679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.233903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.233913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.234137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.234308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.234323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.234480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.234589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.234598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.234834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.235013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.235022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.235128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.235286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.235296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.235460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.235687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.235716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.235857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.235998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.236026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 
00:26:45.664 [2024-05-15 17:17:33.236244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.236498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.236507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.236594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.236815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.236824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.236948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.237053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.237063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.237169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.237345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.237355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.237459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.237627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.237636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.237747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.237844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.664 [2024-05-15 17:17:33.237853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.664 qpair failed and we were unable to recover it. 00:26:45.664 [2024-05-15 17:17:33.238009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.238108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.238118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 
00:26:45.665 [2024-05-15 17:17:33.238273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.238386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.238395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.238509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.238582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.238592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.238705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.238874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.238902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.239047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.239187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.239217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.239422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.239549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.239578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.239780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.240011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.240039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.240270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.240465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.240494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 
00:26:45.665 [2024-05-15 17:17:33.240643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.240919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.240929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.240995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.241160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.241174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.241355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.241479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.241488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.241594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.241788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.241797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.241956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.242113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.242123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.242280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.242488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.242497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.242649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.242777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.242786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 
00:26:45.665 [2024-05-15 17:17:33.243059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.243213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.243243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.243443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.243716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.243725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.243872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.244097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.244125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.244420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.244561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.244590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.244772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.244957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.244986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.245194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.245308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.245337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.245471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.245576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.245586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 
00:26:45.665 [2024-05-15 17:17:33.245772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.246047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.246076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.246371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.246511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.246540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.246788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.246964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.246976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.247079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.247155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.247168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.247358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.247552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.247561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.247732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.247890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.247922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.248083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.248307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.248337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 
00:26:45.665 [2024-05-15 17:17:33.248480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.248649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.248659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.248775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.248875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.248884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.249038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.249208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.249218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.249432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.249598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.249608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.249695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.249864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.249874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.250144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.250318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.250330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.250498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.250596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.250606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 
00:26:45.665 [2024-05-15 17:17:33.250692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.250935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.250945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.251052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.251208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.251218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.251374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.251484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.251494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.251671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.251893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.251902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.252077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.252182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.252192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.252359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.252477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.252487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.252646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.252817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.252852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 
00:26:45.665 [2024-05-15 17:17:33.253062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.253282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.253312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.253518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.253664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.253698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.253925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.254139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.254176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.254411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.254599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.254608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.254858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.255029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.255038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.255227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.255341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.255350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.255601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.255707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.255717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 
00:26:45.665 [2024-05-15 17:17:33.255891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.255992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.256002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.256100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.256261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.256271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.665 qpair failed and we were unable to recover it. 00:26:45.665 [2024-05-15 17:17:33.256434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.256540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.665 [2024-05-15 17:17:33.256550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.256655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.256826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.256836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.256937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.257104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.257116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.257218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.257375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.257385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.257496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.257682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.257691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 
00:26:45.666 [2024-05-15 17:17:33.257775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.258030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.258040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.258273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.258384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.258394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.258493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.258732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.258760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.259047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.259247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.259278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.259488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.259644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.259673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.259937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.260199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.260228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.260384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.260570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.260579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 
00:26:45.666 [2024-05-15 17:17:33.260698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.260853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.260862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.261044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.261218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.261248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.261451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.261740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.261768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.262000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.262263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.262292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.262503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.262649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.262678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.262856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.262960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.262970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.263127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.263323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.263354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 
00:26:45.666 [2024-05-15 17:17:33.263515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.263717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.263745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.263900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.264002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.264012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.264133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.264379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.264388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.264579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.264672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.264682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.264859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.265031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.265060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.265340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.265593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.265613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.265706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.265893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.265903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 
00:26:45.666 [2024-05-15 17:17:33.266056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.266279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.266289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.266483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.266689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.266717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.266876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.267034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.267044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.267240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.267432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.267442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.267688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.267957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.267985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.268200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.268329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.268357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.268492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.268776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.268804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 
00:26:45.666 [2024-05-15 17:17:33.269077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.269363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.269393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.269559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.269707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.269735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.270053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.270271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.270301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.270575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.270722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.270751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.271033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.271277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.271287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.271406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.271577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.271586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.271703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.271871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.271898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 
00:26:45.666 [2024-05-15 17:17:33.272050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.272182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.272212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.272349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.272591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.272620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.272771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.272915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.272925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.273109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.273301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.273330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.273534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.273748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.273777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.666 qpair failed and we were unable to recover it. 00:26:45.666 [2024-05-15 17:17:33.273992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.666 [2024-05-15 17:17:33.274198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.274228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.274497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.274697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.274706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 
00:26:45.667 [2024-05-15 17:17:33.274796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.274972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.274981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.275184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.275386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.275396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.275506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.275605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.275615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.275777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.275968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.275977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.276095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.276331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.276361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.276494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.276798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.276827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.277091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.277198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.277208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 
00:26:45.667 [2024-05-15 17:17:33.277436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.277652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.277680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.277948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.278060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.278070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.278244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.278419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.278429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.278627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.278804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.278832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.279100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.279236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.279266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.279446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.279637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.279665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.279851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.280022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.280050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 
00:26:45.667 [2024-05-15 17:17:33.280381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.280590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.280619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.280779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.280971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.281013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.281152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.281467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.281497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.281764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.281994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.282003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.282167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.282273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.282283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.282382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.282581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.282591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.282744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.282991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.283001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 
00:26:45.667 [2024-05-15 17:17:33.283180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.283436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.283447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.283623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.283792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.283802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.284049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.284270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.284280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.284462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.284633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.284642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.284726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.284832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.284841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.284949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.285130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.285139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.285312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.285470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.285479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 
00:26:45.667 [2024-05-15 17:17:33.285652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.285748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.285757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.285936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.286170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.286180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.286361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.286469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.286479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.286589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.286766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.286775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.286954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.287121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.287131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.287232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.287476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.287486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.287647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.287827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.287836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 
00:26:45.667 [2024-05-15 17:17:33.288035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.288190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.288201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.288315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.288474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.288484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.288649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.288872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.288882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.288996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.289124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.289133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.289292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.289389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.289399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.289573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.289797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.289807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.289911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.290064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.290074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 
00:26:45.667 [2024-05-15 17:17:33.290189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.290354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.290364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.290522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.290610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.290620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.290713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.290797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.290807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.667 qpair failed and we were unable to recover it. 00:26:45.667 [2024-05-15 17:17:33.291043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.667 [2024-05-15 17:17:33.291146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.291156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.291330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.291430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.291440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.291667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.291856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.291865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.292089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.292316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.292327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 
00:26:45.668 [2024-05-15 17:17:33.292417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.292574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.292584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.292696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.292805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.292815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.292924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.293150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.293160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.293360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.293587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.293597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.293844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.293940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.293950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.294137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.294403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.294414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.294525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.294694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.294703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 
00:26:45.668 [2024-05-15 17:17:33.294815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.294923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.294933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.295112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.295342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.295351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.295522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.295630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.295639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.295747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.295848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.295858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.296021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.296194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.296204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.296295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.296451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.296461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.296555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.296725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.296734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 
00:26:45.668 [2024-05-15 17:17:33.296957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.297180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.297190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.297290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.297460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.297470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.297700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.297866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.668 [2024-05-15 17:17:33.297876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.668 qpair failed and we were unable to recover it. 00:26:45.668 [2024-05-15 17:17:33.298100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.298257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.298268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.942 qpair failed and we were unable to recover it. 00:26:45.942 [2024-05-15 17:17:33.298367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.298613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.298623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.942 qpair failed and we were unable to recover it. 00:26:45.942 [2024-05-15 17:17:33.298858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.299030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.299040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.942 qpair failed and we were unable to recover it. 00:26:45.942 [2024-05-15 17:17:33.299218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.299389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.299399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.942 qpair failed and we were unable to recover it. 
00:26:45.942 [2024-05-15 17:17:33.299477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.299631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.299640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.942 qpair failed and we were unable to recover it. 00:26:45.942 [2024-05-15 17:17:33.299817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.299931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.299940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.942 qpair failed and we were unable to recover it. 00:26:45.942 [2024-05-15 17:17:33.300067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.300183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.300194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.942 qpair failed and we were unable to recover it. 00:26:45.942 [2024-05-15 17:17:33.300264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.300442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.300452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.942 qpair failed and we were unable to recover it. 00:26:45.942 [2024-05-15 17:17:33.300612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.300724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.300734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.942 qpair failed and we were unable to recover it. 00:26:45.942 [2024-05-15 17:17:33.300859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.300946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.300955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.942 qpair failed and we were unable to recover it. 00:26:45.942 [2024-05-15 17:17:33.301069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.301160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.301222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.942 qpair failed and we were unable to recover it. 
00:26:45.942 [2024-05-15 17:17:33.301449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.301620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.301630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.942 qpair failed and we were unable to recover it. 00:26:45.942 [2024-05-15 17:17:33.301738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.301899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.301909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.942 qpair failed and we were unable to recover it. 00:26:45.942 [2024-05-15 17:17:33.301990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.302116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.302125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.942 qpair failed and we were unable to recover it. 00:26:45.942 [2024-05-15 17:17:33.302249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.302408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.942 [2024-05-15 17:17:33.302418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.302588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.302748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.302758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.302967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.303074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.303084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.303266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.303379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.303388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 
00:26:45.943 [2024-05-15 17:17:33.303566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.303738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.303748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.303863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.304102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.304111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.304340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.304459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.304471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.304572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.304821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.304830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.305007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.305176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.305186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.305294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.305400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.305409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.305595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.305702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.305711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 
00:26:45.943 [2024-05-15 17:17:33.305813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.305975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.305985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.306092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.306253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.306263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.306423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.306539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.306549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.306708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.306867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.306876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.307102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.307251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.307261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.307385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.307480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.307492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.307592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.307774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.307783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 
00:26:45.943 [2024-05-15 17:17:33.307948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.308052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.308062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.308235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.308400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.308410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.308510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.308683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.308693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.308895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.309000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.309010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.309133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.309221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.309232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.309401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.309632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.309642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.309761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.309926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.309936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 
00:26:45.943 [2024-05-15 17:17:33.310159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.310262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.310272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.310377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.310557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.310568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.943 qpair failed and we were unable to recover it. 00:26:45.943 [2024-05-15 17:17:33.310744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.310916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.943 [2024-05-15 17:17:33.310926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.311032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.311139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.311148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.311401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.311560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.311570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.311817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.311919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.311928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.312091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.312284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.312295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 
00:26:45.944 [2024-05-15 17:17:33.312488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.312654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.312664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.312833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.312957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.312966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.313078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.313186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.313196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.313357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.313524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.313534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.313639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.313802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.313813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.313993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.314108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.314118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.314229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.314448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.314458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 
00:26:45.944 [2024-05-15 17:17:33.314616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.314783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.314793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.314965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.315129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.315138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.315375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.315541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.315551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.315772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.315930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.315940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.316034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.316255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.316264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.316436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.316593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.316602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.316706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.316877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.316887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 
00:26:45.944 [2024-05-15 17:17:33.317037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.317217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.317227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.317454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.317610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.317620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.317811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.318026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.318035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.318274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.318414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.318423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.318532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.318753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.318763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.318928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.319098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.319108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.319280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.319439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.319449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 
00:26:45.944 [2024-05-15 17:17:33.319557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.319666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.319676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.319836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.320057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.320066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.944 [2024-05-15 17:17:33.320155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.320241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.944 [2024-05-15 17:17:33.320251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.944 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.320413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.320585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.320595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.320768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.320871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.320881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.321130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.321323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.321333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.321433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.321605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.321615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 
00:26:45.945 [2024-05-15 17:17:33.321789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.322041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.322050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.322173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.322345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.322355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.322580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.322769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.322778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.322951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.323051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.323061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.323221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.323393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.323403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.323540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.323649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.323659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.323900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.324009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.324019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 
00:26:45.945 [2024-05-15 17:17:33.324114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.324217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.324227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.324330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.324486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.324495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.324676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.324836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.324846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.324955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.325112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.325121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.325292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.325546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.325556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.325830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.326024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.326034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.326192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.326343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.326352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 
00:26:45.945 [2024-05-15 17:17:33.326480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.326574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.326584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.326701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.326810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.326820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.327010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.327103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.327112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.327211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.327329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.327338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.327430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.327561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.327570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.327744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.327836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.327846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.328002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.328112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.328122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 
00:26:45.945 [2024-05-15 17:17:33.328305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.328470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.328480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.328586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.328744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.328753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.945 [2024-05-15 17:17:33.328861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.329013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.945 [2024-05-15 17:17:33.329022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.945 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.329123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.329323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.329333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.329558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.329731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.329741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.329901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.330008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.330017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.330196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.330319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.330329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 
00:26:45.946 [2024-05-15 17:17:33.330428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.330598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.330608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.330783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.330895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.330904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.331008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.331226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.331236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.331466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.331562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.331572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.331675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.331777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.331787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.331946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.332042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.332052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.332172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.332359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.332368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 
00:26:45.946 [2024-05-15 17:17:33.332478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.332706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.332715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.332937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.333109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.333118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.333211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.333313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.333323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.333496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.333561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.333571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.333730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.333885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.333894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.334120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.334341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.334351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.334530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.334637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.334646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 
00:26:45.946 [2024-05-15 17:17:33.334813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.334932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.334942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.335041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.335284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.335294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.335524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.335628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.335638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.335806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.335911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.335920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.336115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.336254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.336264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.336438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.336593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.336603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.336825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.336912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.336922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 
00:26:45.946 [2024-05-15 17:17:33.337020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.337184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.337195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.337296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.337397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.337407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.337564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.337738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.946 [2024-05-15 17:17:33.337747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.946 qpair failed and we were unable to recover it. 00:26:45.946 [2024-05-15 17:17:33.337846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.337950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.337960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.338070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.338315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.338330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.338544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.338732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.338742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.338914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.339024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.339033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 
00:26:45.947 [2024-05-15 17:17:33.339212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.339379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.339388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.339505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.339687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.339696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.339809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.339959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.339968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.340089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.340244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.340254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.340347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.340515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.340525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.340632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.340798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.340807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.340925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.341149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.341158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 
00:26:45.947 [2024-05-15 17:17:33.341395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.341505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.341514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.341676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.341829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.341839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.341941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.342119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.342129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.342240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.342402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.342411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.342506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.342661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.342671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.342759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.342862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.342871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.343058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.343217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.343227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 
00:26:45.947 [2024-05-15 17:17:33.343328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.343506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.343515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.343633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.343721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.343731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.343818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.344012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.344022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.344115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.344284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.344294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.344403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.344569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.947 [2024-05-15 17:17:33.344579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.947 qpair failed and we were unable to recover it. 00:26:45.947 [2024-05-15 17:17:33.344707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.344792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.344802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.344902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.345006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.345015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 
00:26:45.948 [2024-05-15 17:17:33.345121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.345278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.345288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.345398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.345504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.345514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.345634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.345725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.345735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.345909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.346022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.346032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.346287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.346410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.346419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.346572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.346661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.346670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.346776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.346888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.346898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 
00:26:45.948 [2024-05-15 17:17:33.346997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.347218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.347228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.347384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.347523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.347533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.347717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.347876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.347885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.348053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.348171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.348181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.348306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.348407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.348417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.348587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.348827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.348837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.348937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.349106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.349115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 
00:26:45.948 [2024-05-15 17:17:33.349290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.349365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.349375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.349567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.349740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.349749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.349921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.350073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.350082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.350272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.350387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.350397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.350643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.350833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.350843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.351065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.351150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.351160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.351331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.351503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.351516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 
00:26:45.948 [2024-05-15 17:17:33.351681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.351800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.351809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.352034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.352216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.352226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.352403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.352561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.352570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.352771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.352946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.352956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.353213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.353327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.948 [2024-05-15 17:17:33.353336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.948 qpair failed and we were unable to recover it. 00:26:45.948 [2024-05-15 17:17:33.353504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.353731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.353741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.353861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.354038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.354048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 
00:26:45.949 [2024-05-15 17:17:33.354154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.354265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.354275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.354454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.354572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.354581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.354754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.354869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.354880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.355032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.355131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.355141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.355306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.355514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.355524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.355710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.355886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.355896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.356105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.356279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.356289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 
00:26:45.949 [2024-05-15 17:17:33.356380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.356551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.356560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.356810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.357064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.357074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.357317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.357491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.357500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.357659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.357759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.357769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.357865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.358087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.358097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.358270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.358379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.358390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.358588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.358709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.358718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 
00:26:45.949 [2024-05-15 17:17:33.358871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.358989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.358999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.359157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.359263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.359273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.359436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.359615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.359625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.359731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.359902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.359912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.360030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.360185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.360195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.360314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.360411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.360421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.360594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.360706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.360715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 
00:26:45.949 [2024-05-15 17:17:33.360877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.361046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.361055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.361217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.361369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.361380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.361486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.361658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.361668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.361789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.361893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.361903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.949 [2024-05-15 17:17:33.362012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.362117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.949 [2024-05-15 17:17:33.362127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.949 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.362314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.362412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.362422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.362598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.362702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.362712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 
00:26:45.950 [2024-05-15 17:17:33.362890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.362981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.362990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.363157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.363289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.363299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.363474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.363670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.363679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.363783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.363889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.363899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.364082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.364197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.364207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.364382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.364610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.364620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.364796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.364901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.364911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 
00:26:45.950 [2024-05-15 17:17:33.365012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.365198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.365208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.365373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.365461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.365471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.365655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.365764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.365774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.365940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.366112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.366122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.366348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.366444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.366453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.366626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.366803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.366812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.366971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.367129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.367139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 
00:26:45.950 [2024-05-15 17:17:33.367262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.367462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.367472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.367591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.367706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.367716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.367817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.368006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.368016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.368193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.368392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.368402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.368578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.368737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.368747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.368854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.369030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.369039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.369216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.369378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.369388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 
00:26:45.950 [2024-05-15 17:17:33.369548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.369649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.369659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.369878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.370045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.370056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.370214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.370370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.370379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.370611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.370810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.370820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.950 qpair failed and we were unable to recover it. 00:26:45.950 [2024-05-15 17:17:33.370929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.371172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.950 [2024-05-15 17:17:33.371182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.371305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.371548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.371558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.371655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.371882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.371892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 
00:26:45.951 [2024-05-15 17:17:33.372144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.372268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.372278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.372443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.372664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.372673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.372851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.373018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.373027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.373207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.373323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.373334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.373501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.373673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.373683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.373802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.373977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.373987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.374101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.374200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.374211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 
00:26:45.951 [2024-05-15 17:17:33.374320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.374423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.374432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.374603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.374724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.374734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.374841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.375111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.375120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.375289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.375467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.375477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.375750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.375918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.375928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.376035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.376206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.376216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.376330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.376504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.376513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 
00:26:45.951 [2024-05-15 17:17:33.376739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.376824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.376833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.376992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.377106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.377116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.377289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.377393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.377403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.377571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.377765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.377775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.377956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.378069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.378079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.378149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.378254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.378264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 00:26:45.951 [2024-05-15 17:17:33.378376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.378545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.378554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.951 qpair failed and we were unable to recover it. 
00:26:45.951 [2024-05-15 17:17:33.378652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.951 [2024-05-15 17:17:33.378765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.378774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.378881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.379035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.379044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.379174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.379364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.379375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.379478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.379568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.379577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.379687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.379789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.379798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.379921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.380092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.380102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.380263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.380369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.380379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 
00:26:45.952 [2024-05-15 17:17:33.380545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.380700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.380709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.380796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.380958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.380968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.381057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.381215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.381226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.381394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.381621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.381631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.381796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.381973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.381983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.382179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.382277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.382287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.382395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.382618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.382628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 
00:26:45.952 [2024-05-15 17:17:33.382807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.383002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.383012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.383181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.383270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.383279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.383481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.383626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.383643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.383818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.384012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.384025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.384200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.384305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.384319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.384396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.384631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.384644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 00:26:45.952 [2024-05-15 17:17:33.384810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.385003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.952 [2024-05-15 17:17:33.385017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.952 qpair failed and we were unable to recover it. 
00:26:45.952 [2024-05-15 17:17:33.385204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.952 [2024-05-15 17:17:33.385453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.952 [2024-05-15 17:17:33.385466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420
00:26:45.952 qpair failed and we were unable to recover it.
00:26:45.952 [... the same three-part pattern (two posix.c:1037:posix_sock_create connect() failures with errno = 111, then the nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x244fc10 with addr=10.0.0.2, port=4420, ending in "qpair failed and we were unable to recover it.") repeats for every reconnect attempt between 17:17:33.385 and 17:17:33.444 ...]
00:26:45.958 [2024-05-15 17:17:33.444516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.958 [2024-05-15 17:17:33.444684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.958 [2024-05-15 17:17:33.444713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420
00:26:45.958 qpair failed and we were unable to recover it.
00:26:45.958 [2024-05-15 17:17:33.444864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.445007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.445035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.445262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.445388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.445401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.445581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.445772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.445800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.446013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.446143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.446180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.446389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.446531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.446561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.446707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.446967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.446996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.447204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.447397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.447410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 
00:26:45.958 [2024-05-15 17:17:33.447655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.447881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.447895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.448009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.448208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.448224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.448341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.448502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.448515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.448716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.448897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.448910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.449073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.449253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.449282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.449489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.449688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.449717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.449990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.450127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.450139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 
00:26:45.958 [2024-05-15 17:17:33.450268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.450524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.450537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.450794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.450979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.450992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.451089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.451222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.451236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.451415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.451564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.451593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.451742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.451931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.451960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.452178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.452332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.452365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 00:26:45.958 [2024-05-15 17:17:33.452504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.452726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.958 [2024-05-15 17:17:33.452755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.958 qpair failed and we were unable to recover it. 
00:26:45.959 [2024-05-15 17:17:33.452972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.453115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.453144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.453415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.453625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.453654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.453890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.454185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.454215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.454408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.454640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.454675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.454832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.455098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.455127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.455355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.455586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.455615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.455736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.455997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.456026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 
00:26:45.959 [2024-05-15 17:17:33.456181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.456382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.456411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.456679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.456964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.456993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.457264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.457424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.457437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.457694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.457881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.457894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.457992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.458174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.458188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.458422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.458552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.458581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.458826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.458975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.458991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 
00:26:45.959 [2024-05-15 17:17:33.459245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.459441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.459469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.459696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.459942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.459971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.460213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.460448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.460461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.460643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.460886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.460914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.461126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.461301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.461332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.461517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.461757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.461785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.461995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.462188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.462217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 
00:26:45.959 [2024-05-15 17:17:33.462397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.462652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.462680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.462950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.463158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.463194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.463407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.463671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.463700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.463922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.464135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.464176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.464404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.464520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.464533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.464802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.464980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.464993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 00:26:45.959 [2024-05-15 17:17:33.465225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.465485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.465515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.959 qpair failed and we were unable to recover it. 
00:26:45.959 [2024-05-15 17:17:33.465733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.959 [2024-05-15 17:17:33.465883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.465911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.466107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.466273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.466302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.466613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.466821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.466850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.467059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.467293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.467323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.467611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.467868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.467902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.468183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.468337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.468367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.468661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.468833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.468846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 
00:26:45.960 [2024-05-15 17:17:33.469058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.469249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.469280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.469495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.469622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.469651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.469868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.470011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.470040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.470325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.470522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.470535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.470645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.470810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.470823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.471078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.471193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.471207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.471407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.471579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.471592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 
00:26:45.960 [2024-05-15 17:17:33.471755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.471859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.471873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.472038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.472203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.472218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.472410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.472674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.472688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.472814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.472926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.472939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.473130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.473248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.473263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.473437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.473664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.473677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.473852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.473947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.473960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 
00:26:45.960 [2024-05-15 17:17:33.474124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.474312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.474326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.474557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.474789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.474817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.474963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.475118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.475146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.475292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.475542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.475571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.475776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.476037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.476065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.476286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.476499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.476529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.476767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.476897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.476926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 
00:26:45.960 [2024-05-15 17:17:33.477125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.477347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.477378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.960 qpair failed and we were unable to recover it. 00:26:45.960 [2024-05-15 17:17:33.477669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.960 [2024-05-15 17:17:33.477884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.477912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.478056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.478272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.478303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.478538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.478713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.478726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.478841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.479044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.479057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.479224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.479410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.479439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.479577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.479707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.479735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 
00:26:45.961 [2024-05-15 17:17:33.479894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.480081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.480094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.480347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.480530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.480546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.480710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.480901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.480930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.481094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.481202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.481231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.481397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.481603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.481616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.481734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.481847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.481860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.482091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.482182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.482196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 
00:26:45.961 [2024-05-15 17:17:33.482307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.482486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.482524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.482683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.482943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.482972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.483170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.483409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.483438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.483707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.483992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.484020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.484248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.484475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.484506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.484724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.485005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.485033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 00:26:45.961 [2024-05-15 17:17:33.485323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.485609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.961 [2024-05-15 17:17:33.485622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.961 qpair failed and we were unable to recover it. 
00:26:45.961 [2024-05-15 17:17:33.485747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:45.961 [2024-05-15 17:17:33.485881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:45.961 [2024-05-15 17:17:33.485894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 
00:26:45.961 qpair failed and we were unable to recover it. 
00:26:45.961 - 00:26:45.967 [2024-05-15 17:17:33.486137 - 17:17:33.545516] the same three-line sequence repeats for every remaining connection attempt in this interval: connect() failed, errno = 111; sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. 
00:26:45.967 [2024-05-15 17:17:33.545681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:45.967 [2024-05-15 17:17:33.545794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:45.967 [2024-05-15 17:17:33.545807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 
00:26:45.967 qpair failed and we were unable to recover it. 
00:26:45.967 [2024-05-15 17:17:33.545923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.546045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.546058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 00:26:45.967 [2024-05-15 17:17:33.546246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.546363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.546377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 00:26:45.967 [2024-05-15 17:17:33.546543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.546665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.546678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 00:26:45.967 [2024-05-15 17:17:33.546864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.546976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.546989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 00:26:45.967 [2024-05-15 17:17:33.547174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.547335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.547348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 00:26:45.967 [2024-05-15 17:17:33.547457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.547710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.547723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 00:26:45.967 [2024-05-15 17:17:33.547899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.548013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.548026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 
00:26:45.967 [2024-05-15 17:17:33.548210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.548304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.548317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 00:26:45.967 [2024-05-15 17:17:33.548417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.548610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.548623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 00:26:45.967 [2024-05-15 17:17:33.548870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.549067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.549080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 00:26:45.967 [2024-05-15 17:17:33.549338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.549461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.549474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 00:26:45.967 [2024-05-15 17:17:33.549547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.549724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.549737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 00:26:45.967 [2024-05-15 17:17:33.549866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.550100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.550112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 00:26:45.967 [2024-05-15 17:17:33.550365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.550473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.550486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 
00:26:45.967 [2024-05-15 17:17:33.550671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.550847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.967 [2024-05-15 17:17:33.550861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.967 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.551031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.551197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.551211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.551331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.551519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.551532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.551613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.551792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.551805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.552057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.552176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.552193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.552319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.552482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.552495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.552687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.552804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.552817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 
00:26:45.968 [2024-05-15 17:17:33.552937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.553187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.553200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.553375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.553503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.553516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.553655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.553777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.553791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.554065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.554228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.554242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.554475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.554681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.554694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.554810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.554984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.554997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.555256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.555443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.555456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 
00:26:45.968 [2024-05-15 17:17:33.555652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.555846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.555859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.555982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.556157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.556177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.556417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.556597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.556610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.556726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.556854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.556868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.556998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.557092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.557105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.557202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.557282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.557295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.557472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.557651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.557664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 
00:26:45.968 [2024-05-15 17:17:33.557826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.558058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.558071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.558182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.558416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.558428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.558609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.558789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.558802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.558919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.559086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.559099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.559214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.559330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.559342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.559605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.559712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.559725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 00:26:45.968 [2024-05-15 17:17:33.559832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.559947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.968 [2024-05-15 17:17:33.559960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.968 qpair failed and we were unable to recover it. 
00:26:45.969 [2024-05-15 17:17:33.560123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.560233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.560248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.560419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.560512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.560525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.560651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.560824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.560837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.561093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.561198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.561212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.561327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.561556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.561569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.561695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.561871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.561887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.562087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.562268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.562282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 
00:26:45.969 [2024-05-15 17:17:33.562392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.562589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.562602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.562785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.563013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.563026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.563257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.563356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.563369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.563537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.563790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.563803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.564053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.564229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.564243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.564445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.564633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.564646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.564842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.564950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.564963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 
00:26:45.969 [2024-05-15 17:17:33.565143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.565387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.565401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.565527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.565643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.565656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.565774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.565896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.565909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.566029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.566286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.566300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.566427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.566527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.566540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.566742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.566967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.566980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.567187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.567420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.567433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 
00:26:45.969 [2024-05-15 17:17:33.567616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.567802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.567815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.568010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.568212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.568227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.568332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.568502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.568516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.568698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.568815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.568828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.568955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.569133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.569146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.569400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.569576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.569589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.969 qpair failed and we were unable to recover it. 00:26:45.969 [2024-05-15 17:17:33.569704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.969 [2024-05-15 17:17:33.569884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.569896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 
00:26:45.970 [2024-05-15 17:17:33.570013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.570260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.570273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.570475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.570644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.570658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.570783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.571046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.571059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.571224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.571339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.571352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.571583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.571810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.571823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.572030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.572193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.572207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.572382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.572612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.572625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 
00:26:45.970 [2024-05-15 17:17:33.572800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.572928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.572941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.573119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.573325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.573339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.573519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.573687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.573700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.573865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.573962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.573975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.574073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.574255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.574269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.574445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.574569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.574583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.574764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.574934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.574947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 
00:26:45.970 [2024-05-15 17:17:33.575065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.575184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.575197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.575316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.575388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.575402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.575629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.575785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.575799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.575969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.576088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.576101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.576218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.576330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.576343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.576505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.576670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.576682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 00:26:45.970 [2024-05-15 17:17:33.576759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.576870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.576883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.970 qpair failed and we were unable to recover it. 
00:26:45.970 [2024-05-15 17:17:33.576982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.970 [2024-05-15 17:17:33.577083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.971 [2024-05-15 17:17:33.577097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.971 qpair failed and we were unable to recover it. 00:26:45.971 [2024-05-15 17:17:33.577273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.971 [2024-05-15 17:17:33.577542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.971 [2024-05-15 17:17:33.577555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.971 qpair failed and we were unable to recover it. 00:26:45.971 [2024-05-15 17:17:33.577655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.971 [2024-05-15 17:17:33.577752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.971 [2024-05-15 17:17:33.577765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.971 qpair failed and we were unable to recover it. 00:26:45.971 [2024-05-15 17:17:33.577882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.971 [2024-05-15 17:17:33.577991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.971 [2024-05-15 17:17:33.578004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.971 qpair failed and we were unable to recover it. 00:26:45.971 [2024-05-15 17:17:33.578113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.971 [2024-05-15 17:17:33.578350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.971 [2024-05-15 17:17:33.578363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.971 qpair failed and we were unable to recover it. 00:26:45.971 [2024-05-15 17:17:33.578529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.971 [2024-05-15 17:17:33.578644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.971 [2024-05-15 17:17:33.578657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.971 qpair failed and we were unable to recover it. 00:26:45.971 [2024-05-15 17:17:33.578772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.971 [2024-05-15 17:17:33.578893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.971 [2024-05-15 17:17:33.578906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:45.971 qpair failed and we were unable to recover it. 
00:26:45.971 [2024-05-15 17:17:33.578996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.971 [2024-05-15 17:17:33.579169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:45.971 [2024-05-15 17:17:33.579185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420
00:26:45.971 qpair failed and we were unable to recover it.
[... the same three-message pattern repeats continuously from 17:17:33.579 to 17:17:33.628 (console prefixes 00:26:45.971 through 00:26:46.256): every connect() attempt to 10.0.0.2, port 4420 is refused with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x244fc10, and each qpair fails without recovery ...]
00:26:46.256 [2024-05-15 17:17:33.628681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.256 [2024-05-15 17:17:33.628855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.256 [2024-05-15 17:17:33.628890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:46.256 qpair failed and we were unable to recover it.
[... identical refused-connection failures continue for tqpair=0x7f93f8000b90 through 17:17:33.632 (console prefix 00:26:46.257) ...]
00:26:46.257 [2024-05-15 17:17:33.632870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.633132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.633161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.257 qpair failed and we were unable to recover it. 00:26:46.257 [2024-05-15 17:17:33.633305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.633446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.633475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.257 qpair failed and we were unable to recover it. 00:26:46.257 [2024-05-15 17:17:33.633687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.633894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.633923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.257 qpair failed and we were unable to recover it. 00:26:46.257 [2024-05-15 17:17:33.634128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.634341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.634371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.257 qpair failed and we were unable to recover it. 00:26:46.257 [2024-05-15 17:17:33.634493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.634624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.634653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.257 qpair failed and we were unable to recover it. 00:26:46.257 [2024-05-15 17:17:33.634866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.635031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.635061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.257 qpair failed and we were unable to recover it. 00:26:46.257 [2024-05-15 17:17:33.635198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.635396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.635425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.257 qpair failed and we were unable to recover it. 
00:26:46.257 [2024-05-15 17:17:33.635581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.635829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.635838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.257 qpair failed and we were unable to recover it. 00:26:46.257 [2024-05-15 17:17:33.635953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.636067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.636087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.257 qpair failed and we were unable to recover it. 00:26:46.257 [2024-05-15 17:17:33.636178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.636294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.636303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.257 qpair failed and we were unable to recover it. 00:26:46.257 [2024-05-15 17:17:33.636470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.636650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.636679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.257 qpair failed and we were unable to recover it. 00:26:46.257 [2024-05-15 17:17:33.636912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.637120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.637149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.257 qpair failed and we were unable to recover it. 00:26:46.257 [2024-05-15 17:17:33.637356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.637481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.637510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.257 qpair failed and we were unable to recover it. 00:26:46.257 [2024-05-15 17:17:33.637653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.257 [2024-05-15 17:17:33.637914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.637942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 
00:26:46.258 [2024-05-15 17:17:33.638149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.638374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.638404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.638550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.638715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.638725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.638887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.639033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.639043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.639151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.639256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.639267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.639370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.639593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.639603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.639770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.639871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.639881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.639989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.640155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.640169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 
00:26:46.258 [2024-05-15 17:17:33.640297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.640453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.640462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.640636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.640799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.640808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.640911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.641013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.641023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.641260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.641423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.641451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.641582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.641780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.641809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.642011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.642231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.642260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.642407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.642549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.642559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 
00:26:46.258 [2024-05-15 17:17:33.642641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.642756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.642776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.642929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.643051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.643085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.643397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.643540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.643550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.643650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.643804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.643813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.643913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.644033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.644062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.644190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.644452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.644481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.644746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.644848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.644857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 
00:26:46.258 [2024-05-15 17:17:33.645032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.645212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.645243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.645404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.645697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.645726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.645925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.646214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.646243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.258 [2024-05-15 17:17:33.646397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.646603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.258 [2024-05-15 17:17:33.646631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.258 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.646780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.646881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.646892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.646989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.647172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.647182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.647294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.647398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.647408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 
00:26:46.259 [2024-05-15 17:17:33.647578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.647682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.647692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.647850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.648120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.648129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.648237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.648341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.648351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.648513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.648615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.648624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.648783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.648956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.648966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.649071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.649226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.649247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.649338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.649437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.649446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 
00:26:46.259 [2024-05-15 17:17:33.649564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.649806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.649840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.649989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.650197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.650227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.650466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.650673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.650702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.650902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.651079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.651108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.651311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.651441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.651470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.651736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.651927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.651956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.652232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.652462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.652491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 
00:26:46.259 [2024-05-15 17:17:33.652693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.652921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.652949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.653182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.653395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.653423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.653713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.653872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.653882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.654051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.654155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.654178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.654365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.654547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.654556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.654736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.654912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.654921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.655076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.655194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.655204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 
00:26:46.259 [2024-05-15 17:17:33.655374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.655474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.655484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.259 [2024-05-15 17:17:33.655665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.655768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.259 [2024-05-15 17:17:33.655777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.259 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.655898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.656119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.656129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.656307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.656444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.656473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.656608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.656762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.656771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.656881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.656989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.656998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.657145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.657391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.657402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 
00:26:46.260 [2024-05-15 17:17:33.657565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.657732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.657742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.657892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.658153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.658212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.658351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.658556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.658585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.658737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.659002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.659030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.659232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.659428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.659456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.659651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.659825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.659854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.660133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.660342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.660372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 
00:26:46.260 [2024-05-15 17:17:33.660515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.660720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.660749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.660908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.661126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.661154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.661433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.661742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.661771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.661998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.662129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.662157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.662369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.662657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.662685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.662845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.663058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.663086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.663228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.663445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.663473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 
00:26:46.260 [2024-05-15 17:17:33.663607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.663826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.663854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.664066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.664277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.664307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.664594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.664794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.664803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.664980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.665146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.665156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.665361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.665651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.665680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.665836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.665923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.665933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.666105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.666338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.666368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 
00:26:46.260 [2024-05-15 17:17:33.666611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.666802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.666830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.260 [2024-05-15 17:17:33.667021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.667200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.260 [2024-05-15 17:17:33.667230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.260 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.667380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.667515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.667543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.667750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.667885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.667894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.667982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.668225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.668235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.668485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.668731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.668740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.668897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.669061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.669071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 
00:26:46.261 [2024-05-15 17:17:33.669277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.669407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.669436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.669589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.669792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.669801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.669912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.670155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.670168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.670335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.670582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.670591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.670807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.671020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.671048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.671315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.671533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.671562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.671824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.671955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.671965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 
00:26:46.261 [2024-05-15 17:17:33.672139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.672411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.672441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.672642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.672836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.672864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.673015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.673162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.673198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.673347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.673495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.673523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.673733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.673852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.673862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.674056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.674284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.674314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.674522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.674807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.674835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 
00:26:46.261 [2024-05-15 17:17:33.675098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.675244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.675273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.675496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.675640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.675669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.675818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.676019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.676048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.676314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.676454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.676483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.676637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.676907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.676936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.677179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.677409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.677439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.261 qpair failed and we were unable to recover it. 00:26:46.261 [2024-05-15 17:17:33.677577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.677685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.261 [2024-05-15 17:17:33.677694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 
00:26:46.262 [2024-05-15 17:17:33.677804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.677911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.677920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.678179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.678369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.678398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.678628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.678752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.678781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.678934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.679145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.679184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.679385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.679533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.679561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.679708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.679829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.679838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.680011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.680084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.680094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 
00:26:46.262 [2024-05-15 17:17:33.680217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.680409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.680419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.680541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.680615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.680625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.680715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.680805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.680815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.681042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.681252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.681282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.681434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.681630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.681640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.681736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.681966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.681975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.682214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.682387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.682415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 
00:26:46.262 [2024-05-15 17:17:33.682559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.682698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.682726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.682846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.683122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.683131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.683301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.683555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.683583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.683790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.683995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.684024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.684232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.684360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.684388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.684600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.684795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.684823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.685109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.685274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.685284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 
00:26:46.262 [2024-05-15 17:17:33.685394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.685565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.685575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.685749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.685915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.685925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.686100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.686259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.686269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.686387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.686587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.262 [2024-05-15 17:17:33.686615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.262 qpair failed and we were unable to recover it. 00:26:46.262 [2024-05-15 17:17:33.686821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.686945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.686974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.687184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.687384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.687412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.687631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.687826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.687855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 
00:26:46.263 [2024-05-15 17:17:33.688049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.688244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.688274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.688473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.688688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.688716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.688999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.689137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.689183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.689470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.689743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.689753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.689844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.690000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.690009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.690183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.690382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.690411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.690615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.690834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.690863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 
00:26:46.263 [2024-05-15 17:17:33.691082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.691210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.691239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.691504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.691730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.691759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.691958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.692265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.692296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.692503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.692699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.692728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.692951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.693161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.693200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.693427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.693620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.693649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.693878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.694012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.694041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 
00:26:46.263 [2024-05-15 17:17:33.694204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.694490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.694519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.694657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.694847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.694887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.695037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.695297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.695327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.695484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.695760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.695787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.695910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.696078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.696086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.696290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.696530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.696538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.696792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.696866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.696873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 
00:26:46.263 [2024-05-15 17:17:33.696993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.697169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.697177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.697350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.697449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.697457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.697567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.697730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.697738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.697838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.697945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.697954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.263 qpair failed and we were unable to recover it. 00:26:46.263 [2024-05-15 17:17:33.698072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.263 [2024-05-15 17:17:33.698272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.698282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.698446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.698613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.698621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.698732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.698802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.698811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 
00:26:46.264 [2024-05-15 17:17:33.699069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.699177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.699186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.699366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.699483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.699492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.699591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.699703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.699712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.699884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.700044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.700053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.700146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.700322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.700331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.700599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.700758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.700768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.701018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.701190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.701200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 
00:26:46.264 [2024-05-15 17:17:33.701305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.701457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.701467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.701693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.701859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.701868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.701975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.702152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.702161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.702292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.702383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.702393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.702552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.702733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.702743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.702966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.703073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.703083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.703200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.703310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.703320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 
00:26:46.264 [2024-05-15 17:17:33.703431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.703664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.703673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.703794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.703898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.703910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.704085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.704173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.704183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.704280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.704446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.704455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.704571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.704724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.704734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.704827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.704931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.704940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.705102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.705274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.705284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 
00:26:46.264 [2024-05-15 17:17:33.705443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.705611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.705621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.705765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.705875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.705885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.706040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.706266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.706276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.706392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.706509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.706519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.264 qpair failed and we were unable to recover it. 00:26:46.264 [2024-05-15 17:17:33.706615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.264 [2024-05-15 17:17:33.706794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.706806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.706986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.707064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.707074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.707241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.707418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.707428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 
00:26:46.265 [2024-05-15 17:17:33.707594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.707777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.707786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.707985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.708131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.708141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.708243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.708467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.708477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.708573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.708745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.708754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.708985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.709139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.709149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.709321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.709555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.709565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.709724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.709912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.709922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 
00:26:46.265 [2024-05-15 17:17:33.710145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.710317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.710341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.710533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.710729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.710739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.710983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.711076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.711085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.711252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.711420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.711429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.711663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.711833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.711842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.712031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.712140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.712150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.712397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.712509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.712519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 
00:26:46.265 [2024-05-15 17:17:33.712694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.712865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.712874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.712962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.713115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.713124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.713229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.713452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.713462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.713548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.713723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.713735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.713966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.714084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.714094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.714288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.714480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.714490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.714646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.714750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.714759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 
00:26:46.265 [2024-05-15 17:17:33.714935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.715101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.715110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.715335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.715583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.715593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.715787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.715940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.715949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.716173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.716376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.716386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.265 qpair failed and we were unable to recover it. 00:26:46.265 [2024-05-15 17:17:33.716502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.716580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.265 [2024-05-15 17:17:33.716590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 00:26:46.266 [2024-05-15 17:17:33.716838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.716940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.716950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 00:26:46.266 [2024-05-15 17:17:33.717073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.717244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.717254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 
00:26:46.266 [2024-05-15 17:17:33.717522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.717643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.717653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 00:26:46.266 [2024-05-15 17:17:33.717831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.717997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.718007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 00:26:46.266 [2024-05-15 17:17:33.718184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.718367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.718377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 00:26:46.266 [2024-05-15 17:17:33.718536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.718648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.718657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 00:26:46.266 [2024-05-15 17:17:33.718813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.718925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.718934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 00:26:46.266 [2024-05-15 17:17:33.719031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.719215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.719225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 00:26:46.266 [2024-05-15 17:17:33.719332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.719447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.719456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 
00:26:46.266 [2024-05-15 17:17:33.719629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.719730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.719739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 00:26:46.266 [2024-05-15 17:17:33.719913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.720140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.720149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 00:26:46.266 [2024-05-15 17:17:33.720310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.720394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.720403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 00:26:46.266 [2024-05-15 17:17:33.720561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.720749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.720759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 00:26:46.266 [2024-05-15 17:17:33.720929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.721084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.721093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 00:26:46.266 [2024-05-15 17:17:33.721262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.721342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.721352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 00:26:46.266 [2024-05-15 17:17:33.721597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.721787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.721797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.266 qpair failed and we were unable to recover it. 
00:26:46.266 [2024-05-15 17:17:33.721958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.722198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.266 [2024-05-15 17:17:33.722208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.722436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.722691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.722700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.722820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.722978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.722987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.723213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.723389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.723399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.723569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.723789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.723799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.723892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.723987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.723996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.724112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.724276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.724286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 
00:26:46.267 [2024-05-15 17:17:33.724442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.724595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.724604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.724771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.724924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.724934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.725042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.725240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.725250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.725341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.725440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.725450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.725539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.725733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.725743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.725907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.726011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.726020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.726140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.726260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.726270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 
00:26:46.267 [2024-05-15 17:17:33.726430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.726535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.726544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.726660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.726750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.726760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.726848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.727022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.727032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.727098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.727253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.727263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.727369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.727535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.727545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.727650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.727883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.727893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.728002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.728090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.728100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 
00:26:46.267 [2024-05-15 17:17:33.728299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.728409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.728418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.267 qpair failed and we were unable to recover it. 00:26:46.267 [2024-05-15 17:17:33.728591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.267 [2024-05-15 17:17:33.728668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.728678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.728798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.728971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.728981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.729146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.729252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.729262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.729420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.729647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.729657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.729762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.729928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.729938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.730178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.730257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.730267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 
00:26:46.268 [2024-05-15 17:17:33.730372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.730598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.730607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.730714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.730885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.730895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.731070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.731178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.731188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.731355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.731643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.731653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.731811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.731906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.731916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.732088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.732285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.732295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.732554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.732778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.732787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 
00:26:46.268 [2024-05-15 17:17:33.732956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.733067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.733076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.733174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.733355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.733365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.733453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.733567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.733577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.733743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.733836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.733845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.733950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.734180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.734190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.734377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.734568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.734577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.734686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.734843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.734852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 
00:26:46.268 [2024-05-15 17:17:33.735104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.735333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.735343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.735460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.735683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.735693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.735794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.735989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.735999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.736156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.736331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.736341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.736511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.736699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.736709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.736890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.737015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.737025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.737292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.737517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.737526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 
00:26:46.268 [2024-05-15 17:17:33.737639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.737801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.737810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.268 qpair failed and we were unable to recover it. 00:26:46.268 [2024-05-15 17:17:33.737967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.268 [2024-05-15 17:17:33.738072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.738081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.738203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.738369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.738379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.738489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.738645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.738655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.738767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.738862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.738872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.738961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.739137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.739146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.739260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.739348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.739358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 
00:26:46.269 [2024-05-15 17:17:33.739527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.739599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.739608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.739768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.739872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.739882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.739968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.740069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.740079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.740181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.740430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.740439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.740623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.740735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.740745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.740921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.741008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.741017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.741120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.741341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.741351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 
00:26:46.269 [2024-05-15 17:17:33.741462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.741584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.741593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.741766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.741867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.741876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.742034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.742193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.742203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.742327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.742499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.742509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.742613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.742766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.742776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.742971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.743137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.743147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.743249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.743470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.743480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 
00:26:46.269 [2024-05-15 17:17:33.743719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.743949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.743959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.744227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.744336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.744346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.744521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.744759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.744769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.744876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.745033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.745042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.745141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.745308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.745318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.745429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.745538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.745547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.745795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.746020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.746029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 
00:26:46.269 [2024-05-15 17:17:33.746217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.746395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.746405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.269 qpair failed and we were unable to recover it. 00:26:46.269 [2024-05-15 17:17:33.746642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.269 [2024-05-15 17:17:33.746750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.746759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.746920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.747077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.747086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.747287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.747464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.747474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.747595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.747690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.747700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.747880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.748068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.748078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.748179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.748280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.748289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 
00:26:46.270 [2024-05-15 17:17:33.748392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.748640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.748650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.748754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.748860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.748870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.748988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.749109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.749118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.749382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.749640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.749649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.749807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.749974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.749984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.750146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.750244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.750255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.750482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.750677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.750687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 
00:26:46.270 [2024-05-15 17:17:33.750793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.750969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.750979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.751066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.751234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.751244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.751374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.751464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.751473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.751702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.751800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.751810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.752037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.752154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.752167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.752373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.752477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.752487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.752733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.752907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.752916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 
00:26:46.270 [2024-05-15 17:17:33.753089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.753289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.753299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.753547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.753780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.753790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.753959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.754226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.754236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.754454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.754620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.754630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.754814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.754986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.754995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.755169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.755333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.755343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.755507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.755664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.755674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 
00:26:46.270 [2024-05-15 17:17:33.755772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.755939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.755949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.270 [2024-05-15 17:17:33.756106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.756277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.270 [2024-05-15 17:17:33.756289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.270 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.756383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.756501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.756511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.756765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.756872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.756882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.757045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.757216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.757226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.757327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.757493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.757503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.757613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.757767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.757777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 
00:26:46.271 [2024-05-15 17:17:33.757897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.758121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.758131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.758227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.758381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.758391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.758620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.758783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.758793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.759038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.759222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.759233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.759454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.759620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.759632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.759861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.760108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.760136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.760193] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x245d770 (9): Bad file descriptor 00:26:46.271 [2024-05-15 17:17:33.760651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.760910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.760947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 
00:26:46.271 [2024-05-15 17:17:33.761249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.761402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.761432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.761645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.761855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.761885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.762202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.762343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.762372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.762665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.762952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.762965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.763155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.763264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.763277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.763468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.763675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.763688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.763932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.764109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.764122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 
00:26:46.271 [2024-05-15 17:17:33.764306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.764565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.764578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.764706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.764893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.764906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.765018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.765212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.765226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.765352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.765536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.765550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.271 qpair failed and we were unable to recover it. 00:26:46.271 [2024-05-15 17:17:33.765789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.271 [2024-05-15 17:17:33.765976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.765989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 00:26:46.272 [2024-05-15 17:17:33.766221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.766360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.766389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 00:26:46.272 [2024-05-15 17:17:33.766529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.766725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.766754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 
00:26:46.272 [2024-05-15 17:17:33.766897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.767097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.767126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 00:26:46.272 [2024-05-15 17:17:33.767280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.767470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.767499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 00:26:46.272 [2024-05-15 17:17:33.767707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.767935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.767964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 00:26:46.272 [2024-05-15 17:17:33.768161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.768385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.768414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 00:26:46.272 [2024-05-15 17:17:33.768561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.768771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.768800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 00:26:46.272 [2024-05-15 17:17:33.769013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.769296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.769325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 00:26:46.272 [2024-05-15 17:17:33.769592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.769749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.769777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 
00:26:46.272 [2024-05-15 17:17:33.769910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.770146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.770159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 00:26:46.272 [2024-05-15 17:17:33.770325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.770474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.770487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 00:26:46.272 [2024-05-15 17:17:33.770718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.770918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.770931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 00:26:46.272 [2024-05-15 17:17:33.771049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.771231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.771244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 00:26:46.272 [2024-05-15 17:17:33.771500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.771678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.771691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 00:26:46.272 [2024-05-15 17:17:33.771848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.771994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.772022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 00:26:46.272 [2024-05-15 17:17:33.772234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.772507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.272 [2024-05-15 17:17:33.772536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.272 qpair failed and we were unable to recover it. 
00:26:46.272 [2024-05-15 17:17:33.772756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.272 [2024-05-15 17:17:33.772968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.272 [2024-05-15 17:17:33.772996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.272 qpair failed and we were unable to recover it.
00:26:46.272 [2024-05-15 17:17:33.773191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.272 [2024-05-15 17:17:33.773374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.272 [2024-05-15 17:17:33.773403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.272 qpair failed and we were unable to recover it.
00:26:46.272 [2024-05-15 17:17:33.773669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.272 [2024-05-15 17:17:33.773798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.272 [2024-05-15 17:17:33.773829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.272 qpair failed and we were unable to recover it.
00:26:46.272 [2024-05-15 17:17:33.773982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.272 [2024-05-15 17:17:33.774191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.272 [2024-05-15 17:17:33.774222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.272 qpair failed and we were unable to recover it.
00:26:46.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3221999 Killed "${NVMF_APP[@]}" "$@"
00:26:46.272 [2024-05-15 17:17:33.774527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.272 [2024-05-15 17:17:33.774817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.272 [2024-05-15 17:17:33.774846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.272 qpair failed and we were unable to recover it.
00:26:46.272 17:17:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:46.272 [2024-05-15 17:17:33.775068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.272 17:17:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:46.272 [2024-05-15 17:17:33.775350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 17:17:33.775365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.272 qpair failed and we were unable to recover it.
00:26:46.272 [2024-05-15 17:17:33.775475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
17:17:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:46.272 [2024-05-15 17:17:33.775708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 17:17:33.775723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.272 qpair failed and we were unable to recover it.
00:26:46.272 [2024-05-15 17:17:33.775836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
17:17:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:26:46.272 [2024-05-15 17:17:33.775958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 17:17:33.775971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.272 qpair failed and we were unable to recover it.
00:26:46.272 [2024-05-15 17:17:33.776079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
17:17:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:46.272 [2024-05-15 17:17:33.776189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 17:17:33.776204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.272 qpair failed and we were unable to recover it.
00:26:46.272 [2024-05-15 17:17:33.776383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.272 [2024-05-15 17:17:33.776559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 [2024-05-15 17:17:33.776572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.273 qpair failed and we were unable to recover it.
00:26:46.273 [2024-05-15 17:17:33.776750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 [2024-05-15 17:17:33.776927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 [2024-05-15 17:17:33.776940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.273 qpair failed and we were unable to recover it.
00:26:46.273 [2024-05-15 17:17:33.777103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 [2024-05-15 17:17:33.777273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 [2024-05-15 17:17:33.777287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.273 qpair failed and we were unable to recover it.
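Reading the block above: every "connect() failed, errno = 111" entry is the host-side NVMe/TCP initiator getting ECONNREFUSED from 10.0.0.2:4420 after the target application was killed (the "Killed \"${NVMF_APP[@]}\"" line), and the interleaved trace shows the test immediately calling disconnect_init 10.0.0.2, which restarts the target via nvmfappstart. A minimal bash sketch of that kind of wait-for-listener probe is shown below; wait_for_listener is a hypothetical helper written only for illustration, not the SPDK test code itself:

  # Probe the NVMe-oF TCP listen address until connect() stops failing with
  # ECONNREFUSED (errno 111); bash's /dev/tcp redirection performs the connect().
  wait_for_listener() {
    local addr=$1 port=$2
    until (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; do
      sleep 0.5   # target not listening yet; the same condition the log records above
    done
  }
  wait_for_listener 10.0.0.2 4420

The probe connection is opened inside a subshell and closed when the subshell exits, so the loop ends on the first successful TCP handshake.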
00:26:46.273 [2024-05-15 17:17:33.777457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.777659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.777672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.273 qpair failed and we were unable to recover it. 00:26:46.273 [2024-05-15 17:17:33.777767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.778022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.778035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.273 qpair failed and we were unable to recover it. 00:26:46.273 [2024-05-15 17:17:33.778155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.778263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.778277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.273 qpair failed and we were unable to recover it. 00:26:46.273 [2024-05-15 17:17:33.778392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.778504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.778517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.273 qpair failed and we were unable to recover it. 00:26:46.273 [2024-05-15 17:17:33.778665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.778844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.778858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.273 qpair failed and we were unable to recover it. 00:26:46.273 [2024-05-15 17:17:33.778970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.779094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.779110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.273 qpair failed and we were unable to recover it. 00:26:46.273 [2024-05-15 17:17:33.779220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.779396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.779409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.273 qpair failed and we were unable to recover it. 
00:26:46.273 [2024-05-15 17:17:33.779521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.779702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.779715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.273 qpair failed and we were unable to recover it. 00:26:46.273 [2024-05-15 17:17:33.779890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.780120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.780133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.273 qpair failed and we were unable to recover it. 00:26:46.273 [2024-05-15 17:17:33.780308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.780434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.780446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.273 qpair failed and we were unable to recover it. 00:26:46.273 [2024-05-15 17:17:33.780623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.780726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.780738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.273 qpair failed and we were unable to recover it. 00:26:46.273 [2024-05-15 17:17:33.780939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.781041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.781054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.273 qpair failed and we were unable to recover it. 00:26:46.273 [2024-05-15 17:17:33.781160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.781359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.781373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.273 qpair failed and we were unable to recover it. 00:26:46.273 [2024-05-15 17:17:33.781553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.781648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.273 [2024-05-15 17:17:33.781660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.273 qpair failed and we were unable to recover it. 
00:26:46.273 [2024-05-15 17:17:33.781845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 [2024-05-15 17:17:33.781951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 [2024-05-15 17:17:33.781964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.273 qpair failed and we were unable to recover it.
00:26:46.273 [2024-05-15 17:17:33.782176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 [2024-05-15 17:17:33.782281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 [2024-05-15 17:17:33.782297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.273 qpair failed and we were unable to recover it.
00:26:46.273 [2024-05-15 17:17:33.782498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 [2024-05-15 17:17:33.782730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 [2024-05-15 17:17:33.782743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.273 qpair failed and we were unable to recover it.
00:26:46.273 17:17:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3222724
00:26:46.273 [2024-05-15 17:17:33.782924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 [2024-05-15 17:17:33.783039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 [2024-05-15 17:17:33.783051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.273 qpair failed and we were unable to recover it.
00:26:46.273 17:17:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3222724
00:26:46.273 [2024-05-15 17:17:33.783161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 17:17:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:46.273 [2024-05-15 17:17:33.783367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 17:17:33.783380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.273 qpair failed and we were unable to recover it.
00:26:46.273 17:17:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3222724 ']'
00:26:46.273 [2024-05-15 17:17:33.783567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.273 [2024-05-15 17:17:33.783682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 17:17:33.783696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.273 qpair failed and we were unable to recover it.
00:26:46.273 17:17:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
[2024-05-15 17:17:33.783869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.274 [2024-05-15 17:17:33.783976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 17:17:33.783989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.274 qpair failed and we were unable to recover it.
00:26:46.274 17:17:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
[2024-05-15 17:17:33.784100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.274 [2024-05-15 17:17:33.784272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 17:17:33.784286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.274 qpair failed and we were unable to recover it.
00:26:46.274 17:17:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:46.274 [2024-05-15 17:17:33.784396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.274 17:17:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:26:46.274 [2024-05-15 17:17:33.784637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 17:17:33.784653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.274 qpair failed and we were unable to recover it.
00:26:46.274 [2024-05-15 17:17:33.784750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
17:17:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:46.274 [2024-05-15 17:17:33.784935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 17:17:33.784949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.274 qpair failed and we were unable to recover it.
00:26:46.274 [2024-05-15 17:17:33.785130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.274 [2024-05-15 17:17:33.785371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-05-15 17:17:33.785384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.274 qpair failed and we were unable to recover it.
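The trace interleaved above shows nvmfappstart launching a fresh target (nvmfpid=3222724) inside the cvl_0_0_ns_spdk network namespace and then waitforlisten polling until the new process answers on the RPC UNIX socket /var/tmp/spdk.sock (max_retries=100), while the initiator keeps logging refused connections in the background. A rough bash sketch of that start-and-wait step, written only as an illustration under those assumptions (the real waitforlisten helper is the one being traced, and it does more than this), could look like:

  # Illustration only: start a target in the test namespace and poll for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  rpc_addr=/var/tmp/spdk.sock
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; break; }
    [ -S "$rpc_addr" ] && break   # the UNIX domain RPC socket appears once the app is up
    sleep 0.5
  done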
00:26:46.274 [2024-05-15 17:17:33.785561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.785683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.785696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.785928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.786106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.786120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.786311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.786475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.786488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.786653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.786832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.786846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.787094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.787216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.787230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.787444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.787649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.787663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.787746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.787859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.787872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 
00:26:46.274 [2024-05-15 17:17:33.787981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.788100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.788116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.788348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.788518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.788531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.788701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.788875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.788888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.789003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.789180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.789194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.789357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.789466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.789479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.789575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.789746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.789759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.789876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.790040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.790054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 
00:26:46.274 [2024-05-15 17:17:33.790177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.790344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.790358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.790461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.790558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.790571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.790808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.790991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.791004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.791183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.791322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.791341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.791458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.791580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.791594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.791698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.791935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.791948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 00:26:46.274 [2024-05-15 17:17:33.792141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.792269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.792283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.274 qpair failed and we were unable to recover it. 
00:26:46.274 [2024-05-15 17:17:33.792558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.274 [2024-05-15 17:17:33.792668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.792681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.792884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.793055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.793068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.793242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.793443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.793456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.793657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.793758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.793771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.793970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.794091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.794104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.794233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.794420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.794434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.794535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.794704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.794719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 
00:26:46.275 [2024-05-15 17:17:33.794887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.795059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.795073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.795293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.795409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.795423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.795534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.795779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.795792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.795905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.796015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.796028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.796206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.796438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.796451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.796629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.796821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.796834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.797002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.797171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.797185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 
00:26:46.275 [2024-05-15 17:17:33.797310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.797539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.797552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.797723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.797829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.797842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.798026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.798281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.798298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.798503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.798616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.798629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.798725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.798844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.798857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.798968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.799067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.799080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 00:26:46.275 [2024-05-15 17:17:33.799211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.799369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.275 [2024-05-15 17:17:33.799382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.275 qpair failed and we were unable to recover it. 
00:26:46.275 [2024-05-15 17:17:33.799491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.275 [2024-05-15 17:17:33.799662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.275 qpair failed and we were unable to recover it.
00:26:46.275 [the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence repeats continuously from 17:17:33.799 through 17:17:33.850 for tqpair=0x7f9400000b90, 0x7f93f8000b90, 0x244fc10 and 0x7f93f0000b90, all against addr=10.0.0.2, port=4420]
00:26:46.279 [2024-05-15 17:17:33.827285] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization...
00:26:46.279 [2024-05-15 17:17:33.827323] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:46.281 [connect() retries against 10.0.0.2:4420 keep failing with errno = 111 until 17:17:33.850; every affected qpair failed and we were unable to recover it.]
00:26:46.281 [2024-05-15 17:17:33.850873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.281 [2024-05-15 17:17:33.850989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.281 [2024-05-15 17:17:33.851002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.281 qpair failed and we were unable to recover it. 00:26:46.281 [2024-05-15 17:17:33.851185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.281 [2024-05-15 17:17:33.851353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.281 [2024-05-15 17:17:33.851367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.281 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.851639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.851866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.851879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.852141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.852329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.852343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.852507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.852585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.852598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.852851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.852955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.852968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.853087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.853260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.853274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 
00:26:46.282 [2024-05-15 17:17:33.853452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.853614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.853628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.853807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.853973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.853986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.854079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.854243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.854257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.282 [2024-05-15 17:17:33.854358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.854558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.854572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.854830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.854997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.855010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.855098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.855328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.855342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.855463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.855545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.855559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 
00:26:46.282 [2024-05-15 17:17:33.855768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.855951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.855964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.856200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.856377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.856390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.856476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.856728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.856741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.856871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.857096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.857109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.857218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.857346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.857359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.857531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.857693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.857707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.857895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.857987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.858000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 
00:26:46.282 [2024-05-15 17:17:33.858252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.858384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.858398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.858628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.858739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.858752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.858879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.858978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.858991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.859182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.859448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.859460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.859632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.859760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.859773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.859957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.860041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.860054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.860182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.860315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.860328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 
00:26:46.282 [2024-05-15 17:17:33.860501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.860611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.860625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.282 qpair failed and we were unable to recover it. 00:26:46.282 [2024-05-15 17:17:33.860725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.860961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.282 [2024-05-15 17:17:33.860974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.861154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.861410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.861425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.861615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.861782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.861795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.861973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.862151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.862170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.862353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.862469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.862482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.862588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.862689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.862703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 
00:26:46.283 [2024-05-15 17:17:33.862887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.863017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.863030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.863198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.863321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.863334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.863483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.863717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.863731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.863964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.864130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.864143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.864323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.864554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.864567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.864804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.864922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.864935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.865111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.865277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.865290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 
00:26:46.283 [2024-05-15 17:17:33.865486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.865609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.865623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.865856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.866036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.866049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.866289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.866401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.866414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.866545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.866772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.866785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.867015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.867302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.867316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.867498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.867781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.867794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.867897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.868058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.868071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 
00:26:46.283 [2024-05-15 17:17:33.868252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.868353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.868366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.868597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.868685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.868698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.868889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.869010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.869023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.869189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.869308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.869321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.869417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.869615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.869628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.869803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.869977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.869990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.870105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.870251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.870264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 
00:26:46.283 [2024-05-15 17:17:33.870390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.870587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.870600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.870687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.870852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.870866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.870984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.871111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.871125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.871386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.871547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.871560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.871744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.872010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.283 [2024-05-15 17:17:33.872023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.283 qpair failed and we were unable to recover it. 00:26:46.283 [2024-05-15 17:17:33.872293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.872436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.872452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.872633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.872808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.872822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 
00:26:46.284 [2024-05-15 17:17:33.873002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.873183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.873196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.873448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.873686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.873700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.873809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.873931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.873944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.874202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.874396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.874410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.874576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.874834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.874848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.875013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.875187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.875201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.875367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.875550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.875563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 
00:26:46.284 [2024-05-15 17:17:33.875731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.875899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.875912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.876067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.876254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.876268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.876446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.876699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.876712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.876842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.876966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.876978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.877105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.877264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.877275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.877449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.877688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.877698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.877861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.878084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.878094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 
00:26:46.284 [2024-05-15 17:17:33.878186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.878361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.878371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.878476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.878654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.878663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.878776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.878874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.878884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.879060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.879183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.879194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.879292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.879516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.879526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.879764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.880000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.880010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.880181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.880293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.880302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 
00:26:46.284 [2024-05-15 17:17:33.880466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.880639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.880649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.880844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.881070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.881079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.881244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.881334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.881344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.881548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.881626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.881636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.881858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.882014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.882027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.882184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.882274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.284 [2024-05-15 17:17:33.882284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.284 qpair failed and we were unable to recover it. 00:26:46.284 [2024-05-15 17:17:33.882394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.882506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.882516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 
00:26:46.285 [2024-05-15 17:17:33.882682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.882869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.882881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.883039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.883287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.883299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.883393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.883558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.883569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.883731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.883845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.883855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.884102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.884265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.884276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.884445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.884550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.884560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.884803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.884932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.884942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 
00:26:46.285 [2024-05-15 17:17:33.885142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.885344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.885358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.885453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.885623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.885633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.885800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.885977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.885987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.886095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.886216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.886227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.886396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.886548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.886558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.886662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.886766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.886775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.886887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.886992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.887002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 
00:26:46.285 [2024-05-15 17:17:33.887105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.887247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.887258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.887359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.887585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.887594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.887709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.887780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.887789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.285 [2024-05-15 17:17:33.887897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.887997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.285 [2024-05-15 17:17:33.888007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.285 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.888107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.888202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.888214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.888317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.888474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.888484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.888594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.888704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.888713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 
00:26:46.568 [2024-05-15 17:17:33.888820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.888972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.888983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.889088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.889244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.889256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.889360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.889525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.889537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.889708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.889893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.889904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.890091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.890194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.890206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.890277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.890409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.890420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.890516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.890612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.890622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 
00:26:46.568 [2024-05-15 17:17:33.890782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.890877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.890887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.890993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.891079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.891089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.891179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.891279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.891289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.891443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.891537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.891547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.891637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.891791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.891802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.891972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.892076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.892086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.892203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.892336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.892347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 
00:26:46.568 [2024-05-15 17:17:33.892499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.892649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.892659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.892770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.892883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.892893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.893074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.893175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.893186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.893363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.893476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.893486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.893661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.893885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.893895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.894096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.894280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.894291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 00:26:46.568 [2024-05-15 17:17:33.894416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.894531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.894541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.568 qpair failed and we were unable to recover it. 
00:26:46.568 [2024-05-15 17:17:33.894629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.568 [2024-05-15 17:17:33.894730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.894743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.894904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.895004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.895015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.895185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.895349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.895360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.895522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.895622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.895632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.895740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.895841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.895851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.895940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.896109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.896118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.896275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.896437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.896447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 
00:26:46.569 [2024-05-15 17:17:33.896544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.896655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.896666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.896841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.897010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.897021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.897186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.897274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.897285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.897397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.897490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.897504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.897663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.897804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.897814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.897914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.898089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.898099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.898172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.898276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.898286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 
00:26:46.569 [2024-05-15 17:17:33.898459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.898549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.898560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.898679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.898779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.898789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.898885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.899003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.899013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.899092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.899178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.899189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.899277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.899438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.899448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.899553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.899616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.899628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.899744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.899900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.899913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 
00:26:46.569 [2024-05-15 17:17:33.900004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.900094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.900105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.900207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.900382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.900392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.900500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.900610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.900620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.900709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.900803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.900813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.901043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.901245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:46.569 [2024-05-15 17:17:33.901273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.901284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.901401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.901524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.901535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 00:26:46.569 [2024-05-15 17:17:33.901612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.901707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.901717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.569 qpair failed and we were unable to recover it. 
00:26:46.569 [2024-05-15 17:17:33.901823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.569 [2024-05-15 17:17:33.901922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.901932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.902018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.902123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.902134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.902297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.902392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.902404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.902563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.902664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.902675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.902847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.902925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.902936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.903044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.903193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.903204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.903340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.903469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.903479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 
00:26:46.570 [2024-05-15 17:17:33.903566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.903668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.903682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.903783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.903886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.903896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.904072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.904157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.904173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.904332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.904404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.904414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.904582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.904738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.904749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.904905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.905086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.905098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.905201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.905361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.905372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 
00:26:46.570 [2024-05-15 17:17:33.905466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.905552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.905561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.905651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.905711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.905721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.905885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.906042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.906052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.906147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.906302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.906313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.906469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.906573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.906585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.906685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.906778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.906790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.906895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.906987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.906999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 
00:26:46.570 [2024-05-15 17:17:33.907105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.907202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.907214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.907446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.907564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.907577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.907690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.907773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.907783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.907944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.908113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.908124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.908235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.908324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.908335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.908425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.908520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.908530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 00:26:46.570 [2024-05-15 17:17:33.908635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.908798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.908809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.570 qpair failed and we were unable to recover it. 
00:26:46.570 [2024-05-15 17:17:33.908897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.909032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.570 [2024-05-15 17:17:33.909044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.909215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.909309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.909319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.909514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.909597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.909607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.909698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.909790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.909802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.910008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.910185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.910197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.910387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.910486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.910497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.910593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.910685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.910695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 
00:26:46.571 [2024-05-15 17:17:33.910942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.911054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.911065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.911189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.911293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.911304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.911482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.911578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.911589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.911667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.911786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.911797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.911888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.911994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.912004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.912091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.912200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.912212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.912390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.912479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.912489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 
00:26:46.571 [2024-05-15 17:17:33.912657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.912751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.912761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.912936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.913036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.913047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.913209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.913375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.913386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.913488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.913607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.913617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.913715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.913885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.913895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.914073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.914175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.914186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.914289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.914382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.914392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 
00:26:46.571 [2024-05-15 17:17:33.914483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.914549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.914559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.914647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.914786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.914796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.914886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.915062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.915072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.915174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.915254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.915264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.915434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.915522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.915532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.915625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.915719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.915729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.571 [2024-05-15 17:17:33.915953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.916066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.916075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 
00:26:46.571 [2024-05-15 17:17:33.916232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.916331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.571 [2024-05-15 17:17:33.916342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.571 qpair failed and we were unable to recover it. 00:26:46.572 [2024-05-15 17:17:33.916448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.572 [2024-05-15 17:17:33.916512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.572 [2024-05-15 17:17:33.916523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.572 qpair failed and we were unable to recover it. 00:26:46.572 [2024-05-15 17:17:33.916623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.572 [2024-05-15 17:17:33.916709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.572 [2024-05-15 17:17:33.916720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.572 qpair failed and we were unable to recover it. 00:26:46.572 [2024-05-15 17:17:33.916822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.572 [2024-05-15 17:17:33.916930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.572 [2024-05-15 17:17:33.916940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.572 qpair failed and we were unable to recover it. 00:26:46.572 [2024-05-15 17:17:33.917018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.572 [2024-05-15 17:17:33.917150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.572 [2024-05-15 17:17:33.917161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.572 qpair failed and we were unable to recover it. 00:26:46.572 [2024-05-15 17:17:33.917270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.572 [2024-05-15 17:17:33.917378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.572 [2024-05-15 17:17:33.917389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.572 qpair failed and we were unable to recover it. 00:26:46.572 [2024-05-15 17:17:33.917544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.572 [2024-05-15 17:17:33.917631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.572 [2024-05-15 17:17:33.917641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.572 qpair failed and we were unable to recover it. 
00:26:46.572 [2024-05-15 17:17:33.917736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.572 [2024-05-15 17:17:33.917839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.572 [2024-05-15 17:17:33.917849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:46.572 qpair failed and we were unable to recover it.
[... the same pattern repeats continuously from 17:17:33.918 through 17:17:33.957 (console timestamps 00:26:46.572-00:26:46.578): two posix_sock_create connect() failures with errno = 111, followed by an nvme_tcp_qpair_connect_sock error against addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.", cycling through tqpair handles 0x7f93f8000b90, 0x7f93f0000b90 and 0x7f9400000b90 ...]
00:26:46.578 [2024-05-15 17:17:33.957085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.578 [2024-05-15 17:17:33.957185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.578 [2024-05-15 17:17:33.957194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:46.578 qpair failed and we were unable to recover it.
00:26:46.578 [2024-05-15 17:17:33.957284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.957385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.957394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.957559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.957621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.957630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.957730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.957830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.957840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.957936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.958129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.958139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.958385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.958518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.958528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.958681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.958768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.958777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.958876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.959033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.959043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 
00:26:46.578 [2024-05-15 17:17:33.959219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.959442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.959452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.959553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.959663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.959672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.959784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.959880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.959889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.960055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.960146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.960155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.960257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.960341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.960351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.960466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.960558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.960568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.960695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.960767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.960776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 
00:26:46.578 [2024-05-15 17:17:33.960876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.960960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.960969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.961147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.961252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.961262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.961351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.961462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.961472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.961585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.961683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.961692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.961848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.962012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.962022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.962115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.962259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.962281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.578 qpair failed and we were unable to recover it. 00:26:46.578 [2024-05-15 17:17:33.962393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.578 [2024-05-15 17:17:33.962486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.962496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 
00:26:46.579 [2024-05-15 17:17:33.962585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.962686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.962696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.962794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.962890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.962900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.963092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.963177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.963188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.963276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.963367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.963376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.963530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.963631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.963641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.963724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.963909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.963920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.964079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.964193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.964203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 
00:26:46.579 [2024-05-15 17:17:33.964341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.964426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.964437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.964531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.964604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.964613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.964708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.964866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.964876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.964970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.965062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.965071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.965185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.965328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.965337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.965438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.965515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.965524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.965629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.965722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.965732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 
00:26:46.579 [2024-05-15 17:17:33.965823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.965988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.965997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.966093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.966186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.966197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.966306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.966391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.966401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.966509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.966666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.966677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.966764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.966872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.966881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.967042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.967126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.967135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.967330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.967425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.967434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 
00:26:46.579 [2024-05-15 17:17:33.967523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.967623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.967633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.967699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.967781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.967790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.967883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.967980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.967990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.968113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.968211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.968222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.579 [2024-05-15 17:17:33.968325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.968419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.579 [2024-05-15 17:17:33.968428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.579 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.968525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.968617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.968627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.968718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.968807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.968817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 
00:26:46.580 [2024-05-15 17:17:33.968905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.969006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.969016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.969104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.969196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.969206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.969312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.969412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.969422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.969590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.969703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.969713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.969810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.969909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.969918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.970005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.970095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.970105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.970222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.970317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.970328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 
00:26:46.580 [2024-05-15 17:17:33.970434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.970663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.970674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.970768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.970906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.970915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.971010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.971173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.971183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.971254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.971364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.971374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.971486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.971594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.971604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.971717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.971808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.971818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.971988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.972218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.972229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 
00:26:46.580 [2024-05-15 17:17:33.972328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.972420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.972430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.972529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.972634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.972645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.972740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.972838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.972847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.972995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.973089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.973099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.973210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.973297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.973308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.973407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.973517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.973527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.973680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.973838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.973850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 
00:26:46.580 [2024-05-15 17:17:33.973963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.974037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.974047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.974143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.974239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.974250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.974415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.974583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.974594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.974694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.974791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.580 [2024-05-15 17:17:33.974800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.580 qpair failed and we were unable to recover it. 00:26:46.580 [2024-05-15 17:17:33.974891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.974981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.974991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.975169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.975183] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.581 [2024-05-15 17:17:33.975210] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.581 [2024-05-15 17:17:33.975217] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:46.581 [2024-05-15 17:17:33.975223] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:46.581 [2024-05-15 17:17:33.975228] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:46.581 [2024-05-15 17:17:33.975340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.975351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 
00:26:46.581 [2024-05-15 17:17:33.975337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:46.581 [2024-05-15 17:17:33.975444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.975444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:46.581 [2024-05-15 17:17:33.975618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.975629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.975739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.975808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:46.581 [2024-05-15 17:17:33.975841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.975850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.975808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:46.581 [2024-05-15 17:17:33.976004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.976162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.976176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.976273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.976358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.976366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.976464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.976532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.976542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.976633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.976719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.976727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 
00:26:46.581 [2024-05-15 17:17:33.976885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.977048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.977058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.977225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.977502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.977512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.977603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.977827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.977837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.977922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.978025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.978034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.978112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.978204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.978214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.978310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.978464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.978473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.978700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.978811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.978820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 
00:26:46.581 [2024-05-15 17:17:33.978986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.979112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.979122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.979219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.979384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.979394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.979497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.979586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.979597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.979719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.979804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.979813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.979884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.980059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.980069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.980238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.980315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.980325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.980431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.980516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.980525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 
00:26:46.581 [2024-05-15 17:17:33.980621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.980698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.980708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.581 [2024-05-15 17:17:33.980802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.980883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.581 [2024-05-15 17:17:33.980892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.581 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.981056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.981153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.981182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.981368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.981468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.981478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.981575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.981654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.981663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.981766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.981934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.981944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.982034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.982190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.982201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 
00:26:46.582 [2024-05-15 17:17:33.982312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.982407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.982417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.982531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.982637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.982646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.982837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.982955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.982970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.983099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.983235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.983251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.983381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.983499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.983516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.983619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.983697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.983706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.983795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.983892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.983903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 
00:26:46.582 [2024-05-15 17:17:33.984012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.984126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.984136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.984227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.984413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.984423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.984581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.984686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.984696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.984772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.984874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.984883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.984992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.985094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.985104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.985211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.985314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.985323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.985384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.985469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.985479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 
00:26:46.582 [2024-05-15 17:17:33.985574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.985658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.985668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.985828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.985926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.985936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.986030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.986184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.986195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.582 qpair failed and we were unable to recover it. 00:26:46.582 [2024-05-15 17:17:33.986353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.986451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.582 [2024-05-15 17:17:33.986461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.986556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.986649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.986659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.986826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.986911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.986922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.987047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.987204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.987215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 
00:26:46.583 [2024-05-15 17:17:33.987379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.987473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.987483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.987644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.987736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.987746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.987849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.987939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.987949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.988045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.988118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.988134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.988248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.988348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.988357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.988463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.988555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.988565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.988741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.988829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.988838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 
00:26:46.583 [2024-05-15 17:17:33.988939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.989065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.989075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.989170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.989278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.989289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.989464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.989562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.989572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.989777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.989914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.989923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.990081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.990152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.990162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.990270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.990378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.990387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.990503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.990588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.990598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 
00:26:46.583 [2024-05-15 17:17:33.990696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.990851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.990861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.990963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.991116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.991126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.991221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.991327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.991338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.991438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.991540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.991549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.991647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.991745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.991754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.991859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.991949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.991958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.992052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.992126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.992136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 
00:26:46.583 [2024-05-15 17:17:33.992247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.992413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.992424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.583 qpair failed and we were unable to recover it. 00:26:46.583 [2024-05-15 17:17:33.992531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.583 [2024-05-15 17:17:33.992647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.992657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.992817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.992970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.992980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.993077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.993238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.993248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.993344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.993446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.993455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.993555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.993724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.993734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.993829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.993996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.994006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 
00:26:46.584 [2024-05-15 17:17:33.994094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.994159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.994172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.994397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.994505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.994514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.994674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.994777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.994786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.994883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.994979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.994993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.995158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.995271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.995280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.995421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.995537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.995547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.995651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.995739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.995749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 
00:26:46.584 [2024-05-15 17:17:33.995857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.995951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.995962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.996176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.996369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.996380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.996471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.996561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.996571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.996664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.996848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.996860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.996970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.997067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.997077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.997194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.997293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.997304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.997398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.997497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.997510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 
00:26:46.584 [2024-05-15 17:17:33.997674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.997762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.997772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.997866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.997956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.997965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.998095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.998192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.998202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.998303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.998474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.998486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.998707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.998802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.998812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.998904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.998993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.999003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.999236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.999352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.999364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 
00:26:46.584 [2024-05-15 17:17:33.999466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.999567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.999578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.584 qpair failed and we were unable to recover it. 00:26:46.584 [2024-05-15 17:17:33.999735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.584 [2024-05-15 17:17:33.999903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:33.999915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.000020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.000198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.000214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.000327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.000422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.000433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.000534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.000637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.000647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.000741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.000829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.000838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.000932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.001095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.001108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 
00:26:46.585 [2024-05-15 17:17:34.001197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.001286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.001296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.001469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.001561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.001570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.001666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.001767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.001777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.001955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.002120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.002130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.002244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.002395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.002407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.002503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.002588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.002600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.002689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.002792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.002802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 
00:26:46.585 [2024-05-15 17:17:34.002903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.003019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.003028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.003190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.003298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.003309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.003394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.003550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.003561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.003661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.003749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.003760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.003857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.003963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.003973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.004074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.004248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.004259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.004350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.004533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.004544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 
00:26:46.585 [2024-05-15 17:17:34.004670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.004827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.004838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.005015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.005120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.005129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.005303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.005400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.005410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.005502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.005661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.005671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.005776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.005952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.005961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.006064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.006179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.006189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.006277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.006369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.006379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 
00:26:46.585 [2024-05-15 17:17:34.006548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.006648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.006659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.585 qpair failed and we were unable to recover it. 00:26:46.585 [2024-05-15 17:17:34.006755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.006865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.585 [2024-05-15 17:17:34.006875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.006964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.007120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.007131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.007304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.007405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.007415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.007539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.007754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.007765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.007941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.008039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.008049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.008224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.008312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.008322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 
00:26:46.586 [2024-05-15 17:17:34.008488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.008575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.008585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.008690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.008778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.008788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.008891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.008990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.009000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.009089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.009189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.009201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.009378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.009477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.009488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.009592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.009696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.009706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.009806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.009910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.009920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 
00:26:46.586 [2024-05-15 17:17:34.010033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.010136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.010146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.010403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.010507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.010516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.010606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.010704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.010714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.010885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.010984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.010995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.011097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.011188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.011199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.011303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.011388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.011397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.011569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.011724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.011734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 
00:26:46.586 [2024-05-15 17:17:34.011908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.011997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.012006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.012187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.012365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.012375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.012548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.012648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.012657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.012827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.012914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.012924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.013010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.013225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.013236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.013334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.013437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.013447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.013559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.013705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.013715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 
00:26:46.586 [2024-05-15 17:17:34.013882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.013989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.013999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.586 qpair failed and we were unable to recover it. 00:26:46.586 [2024-05-15 17:17:34.014104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.586 [2024-05-15 17:17:34.014263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.014274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.014397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.014496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.014506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.014609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.014708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.014718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.014806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.014902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.014911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.015002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.015095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.015103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.015231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.015320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.015330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 
00:26:46.587 [2024-05-15 17:17:34.015421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.015502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.015512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.015605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.015697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.015708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.015806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.015902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.015912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.015997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.016101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.016110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.016272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.016381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.016390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.016510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.016601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.016610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.016700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.016793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.016804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 
00:26:46.587 [2024-05-15 17:17:34.016982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.017082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.017091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.017201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.017365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.017376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.017461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.017551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.017561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.017660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.017757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.017768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.017869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.017965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.017975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.018092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.018181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.018191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.018277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.018368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.018378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 
00:26:46.587 [2024-05-15 17:17:34.018465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.018551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.018561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.018668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.018830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.018841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.018918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.019036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.019046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.587 qpair failed and we were unable to recover it. 00:26:46.587 [2024-05-15 17:17:34.019150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.587 [2024-05-15 17:17:34.019260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.019271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.019373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.019596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.019607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.019712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.019870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.019880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.020047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.020203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.020214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 
00:26:46.588 [2024-05-15 17:17:34.020320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.020419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.020429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.020540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.020663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.020673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.020771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.020867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.020877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.020974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.021135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.021145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.021245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.021320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.021330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.021427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.021604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.021614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.021701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.021874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.021884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 
00:26:46.588 [2024-05-15 17:17:34.021993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.022113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.022122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.022340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.022441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.022451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.022590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.022773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.022787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.023026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.023102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.023115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.023308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.023423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.023437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.023552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.023647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.023660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.023841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.023938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.023951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 
00:26:46.588 [2024-05-15 17:17:34.024044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.024229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.024243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.024439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.024539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.024552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.024648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.024745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.024759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.024857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.024980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.024994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.025105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.025217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.025232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.025341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.025518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.025532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.025704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.025863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.025876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 
00:26:46.588 [2024-05-15 17:17:34.025989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.026134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.026147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.026347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.026460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.026474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.026579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.026694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.026707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.588 [2024-05-15 17:17:34.026920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.027093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.588 [2024-05-15 17:17:34.027107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.588 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.027222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.027339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.027354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.027468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.027569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.027583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.027700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.027881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.027898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 
00:26:46.589 [2024-05-15 17:17:34.028086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.028197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.028215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.028345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.028494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.028510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f0000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.028635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.028843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.028858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.029029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.029190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.029204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.029311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.029496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.029508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.029671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.029764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.029774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.029880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.030033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.030044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 
00:26:46.589 [2024-05-15 17:17:34.030162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.030329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.030339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.030434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.030597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.030607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.030784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.030870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.030880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.031042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.031198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.031209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.031300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.031405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.031416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.031579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.031734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.031744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.031916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.032027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.032037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 
00:26:46.589 [2024-05-15 17:17:34.032129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.032283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.032294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.032490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.032665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.032674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.032767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.032872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.032881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.032979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.033066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.033076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.033181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.033317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.033328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.033558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.033732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.033741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.033950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.034038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.034048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 
00:26:46.589 [2024-05-15 17:17:34.034160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.034274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.034284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.034388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.034479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.034489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.034663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.034781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.034791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.034890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.034991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.589 [2024-05-15 17:17:34.035000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.589 qpair failed and we were unable to recover it. 00:26:46.589 [2024-05-15 17:17:34.035154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.035321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.035331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.035437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.035529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.035538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.035695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.035768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.035777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 
00:26:46.590 [2024-05-15 17:17:34.035872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.035964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.035974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.036066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.036152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.036162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.036368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.036527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.036537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.036722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.036899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.036910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.037019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.037242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.037252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.037370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.037477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.037487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.037637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.037729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.037738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 
00:26:46.590 [2024-05-15 17:17:34.037836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.037946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.037956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.038056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.038143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.038153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.038284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.038374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.038383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.038476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.038629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.038638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.038734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.038884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.038894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.038994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.039097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.039106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.039208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.039316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.039329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 
00:26:46.590 [2024-05-15 17:17:34.039426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.039567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.039577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.039732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.039820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.039831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.039938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.040039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.040049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.040144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.040253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.040270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.040399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.040507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.040516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.040669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.040762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.040771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.040871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.041025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.041035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 
00:26:46.590 [2024-05-15 17:17:34.041207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.041306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.041316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.041431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.041563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.041573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.041682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.041777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.041789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.041881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.041981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.041990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.590 qpair failed and we were unable to recover it. 00:26:46.590 [2024-05-15 17:17:34.042085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.590 [2024-05-15 17:17:34.042182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.042193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.042295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.042390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.042400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.042521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.042676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.042686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 
00:26:46.591 [2024-05-15 17:17:34.042784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.042949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.042959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.043051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.043261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.043272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.043365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.043455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.043465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.043535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.043623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.043632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.043726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.043842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.043851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.043976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.044061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.044072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.044170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.044269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.044279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 
00:26:46.591 [2024-05-15 17:17:34.044453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.044553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.044562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.044659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.044762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.044773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.044863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.045017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.045027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.045205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.045453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.045463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.045551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.045659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.045669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.045768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.045863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.045873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.046039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.046150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.046160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 
00:26:46.591 [2024-05-15 17:17:34.046255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.046351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.046360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.046453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.046648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.046659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.046847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.046957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.046967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.047061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.047295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.047305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.047498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.047589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.047599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.047702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.047803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.047812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.047992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.048094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.048103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 
00:26:46.591 [2024-05-15 17:17:34.048214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.048320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.048330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.048432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.048529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.048538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.591 qpair failed and we were unable to recover it. 00:26:46.591 [2024-05-15 17:17:34.048634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.048725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.591 [2024-05-15 17:17:34.048735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.048834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.048933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.048942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.049030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.049117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.049126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.049229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.049347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.049357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.049516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.049671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.049680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 
00:26:46.592 [2024-05-15 17:17:34.049837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.049925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.049934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.050113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.050338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.050348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.050448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.050562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.050573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.050665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.050775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.050785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.050890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.050977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.050987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.051147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.051292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.051303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.051415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.051523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.051533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 
00:26:46.592 [2024-05-15 17:17:34.051684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.051781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.051790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.051886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.051976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.051985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.052099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.052194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.052204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.052297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.052393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.052402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.052504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.052597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.052607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.052771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.052872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.052882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.052973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.053152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.053162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 
00:26:46.592 [2024-05-15 17:17:34.053345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.053431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.053441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.053546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.053641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.053650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.053744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.053844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.053854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.054009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.054103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.054112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.054298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.054406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.054415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.054522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.054619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.054629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 00:26:46.592 [2024-05-15 17:17:34.054804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.054898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.592 [2024-05-15 17:17:34.054907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.592 qpair failed and we were unable to recover it. 
00:26:46.593 [2024-05-15 17:17:34.055013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.055191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.055201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.055301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.055406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.055415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.055571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.055736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.055746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.055844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.055935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.055944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.056039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.056131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.056141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.056310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.056474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.056485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.056580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.056679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.056689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 
00:26:46.593 [2024-05-15 17:17:34.056849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.056950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.056960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.057080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.057173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.057184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.057292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.057392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.057402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.057489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.057581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.057590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.057685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.057777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.057787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.057948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.058105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.058114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.058293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.058387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.058396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 
00:26:46.593 [2024-05-15 17:17:34.058497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.058589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.058598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.058695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.058857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.058867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.058967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.059061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.059070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.059169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.059263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.059272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.059374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.059477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.059487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.059718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.059894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.059904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.060004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.060158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.060173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 
00:26:46.593 [2024-05-15 17:17:34.060273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.060385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.060394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.060511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.060715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.060724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.060814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.060913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.060922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.061075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.061175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.061185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.061284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.061436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.061446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.061541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.061652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.061661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.593 qpair failed and we were unable to recover it. 00:26:46.593 [2024-05-15 17:17:34.061761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.061863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.593 [2024-05-15 17:17:34.061872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 
00:26:46.594 [2024-05-15 17:17:34.061958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.062089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.062099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.062235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.062340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.062350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.062440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.062665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.062675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.062781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.062888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.062897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.063124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.063224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.063234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.063434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.063537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.063547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.063703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.063804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.063813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 
00:26:46.594 [2024-05-15 17:17:34.063958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.064116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.064126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.064235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.064427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.064437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.064547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.064720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.064730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.064820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.064892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.064902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.064998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.065103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.065112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.065208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.065366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.065375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.065535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.065610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.065619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 
00:26:46.594 [2024-05-15 17:17:34.065711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.065821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.065830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.065918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.066094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.066104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.066291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.066388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.066398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.066575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.066674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.066684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.066771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.066862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.066872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.067033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.067195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.067204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.067313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.067479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.067489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 
00:26:46.594 [2024-05-15 17:17:34.067586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.067678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.067687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.067866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.067960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.067970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.068072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.068195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.068205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.068306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.068470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.068480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.068578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.068681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.068691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.068776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.068930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.068939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.594 qpair failed and we were unable to recover it. 00:26:46.594 [2024-05-15 17:17:34.069057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.594 [2024-05-15 17:17:34.069224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.069235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 
00:26:46.595 [2024-05-15 17:17:34.069326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.069428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.069437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.069596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.069704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.069714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.069873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.070091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.070102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.070325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.070424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.070434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.070597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.070751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.070761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.070880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.071047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.071057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.071163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.071268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.071278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 
00:26:46.595 [2024-05-15 17:17:34.071376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.071488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.071497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.071615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.071773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.071782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.071942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.072049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.072059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.072234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.072390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.072401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.072579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.072702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.072711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.072805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.072966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.072975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.073081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.073191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.073201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 
00:26:46.595 [2024-05-15 17:17:34.073292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.073488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.073498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.073678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.073796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.073805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.073914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.074053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.074062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.074229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.074321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.074330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.074442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.074532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.074541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.074618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.074779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.074788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.074883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.074971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.074980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 
00:26:46.595 [2024-05-15 17:17:34.075087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.075259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.075272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.075371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.075528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.075538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.075635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.075803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.075813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.075915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.076070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.076080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.076189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.076283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.076293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.076415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.076569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.076580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.595 qpair failed and we were unable to recover it. 00:26:46.595 [2024-05-15 17:17:34.076668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.076756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.595 [2024-05-15 17:17:34.076766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.596 qpair failed and we were unable to recover it. 
00:26:46.596 [2024-05-15 17:17:34.076858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.596 [2024-05-15 17:17:34.076989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.596 [2024-05-15 17:17:34.076998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:46.596 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every subsequent connection attempt between 17:17:34.076 and 17:17:34.117, with only the timestamps changing ...]
00:26:46.601 [2024-05-15 17:17:34.117046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.601 [2024-05-15 17:17:34.117136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.601 [2024-05-15 17:17:34.117146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:46.601 qpair failed and we were unable to recover it.
00:26:46.601 [2024-05-15 17:17:34.117246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.117361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.117373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.117478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.117572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.117581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.117738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.117828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.117837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.117935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.118019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.118028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.118117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.118289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.118299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.118460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.118550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.118560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.118718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.118882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.118891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 
00:26:46.601 [2024-05-15 17:17:34.119002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.119173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.119183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.119276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.119371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.119380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.119560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.119661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.119671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.119772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.119870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.119882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.119975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.120151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.120160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.120330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.120522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.120531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.120618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.120704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.120713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 
00:26:46.601 [2024-05-15 17:17:34.120809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.120906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.120915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.121073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.121179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.121189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.121290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.121381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.121390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.121544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.121636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.121645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.121752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.121863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.121872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.122029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.122224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.122234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.122331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.122424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.122436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 
00:26:46.601 [2024-05-15 17:17:34.122615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.122779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.122788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.123010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.123116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.123126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.123282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.123393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.123402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.601 qpair failed and we were unable to recover it. 00:26:46.601 [2024-05-15 17:17:34.123493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.601 [2024-05-15 17:17:34.123648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.123659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.123754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.123859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.123869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.124022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.124185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.124195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.124293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.124403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.124415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 
00:26:46.602 [2024-05-15 17:17:34.124523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.124683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.124693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.124805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.125074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.125085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.125181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.125292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.125303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.125403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.125507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.125517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.125720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.125888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.125897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.126009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.126167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.126177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.126277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.126386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.126396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 
00:26:46.602 [2024-05-15 17:17:34.126561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.126648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.126658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.126763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.126877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.126887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.126989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.127078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.127088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.127188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.127270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.127279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.127377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.127463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.127472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.127625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.127872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.127881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.127987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.128103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.128113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 
00:26:46.602 [2024-05-15 17:17:34.128208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.128363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.128373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.128495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.128589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.128599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.128708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.128801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.128810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.128969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.129062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.129071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.129181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.129278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.129288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.129442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.129609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.129619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.129713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.129871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.129881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 
00:26:46.602 [2024-05-15 17:17:34.129986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.130073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.130083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.130193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.130363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.130373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.130469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.130586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.130596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.130779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.130869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.130879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.130968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.131078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.131089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.131273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.131435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.131445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.131554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.131699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.131710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 
00:26:46.602 [2024-05-15 17:17:34.131811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.131900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.131910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.602 qpair failed and we were unable to recover it. 00:26:46.602 [2024-05-15 17:17:34.132011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.602 [2024-05-15 17:17:34.132105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.132115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.132213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.132317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.132326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.132441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.132538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.132547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.132650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.132810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.132819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.132997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.133088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.133097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.133192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.133282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.133292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 
00:26:46.603 [2024-05-15 17:17:34.133459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.133617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.133627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.133740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.133942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.133953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.134041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.134115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.134124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.134221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.134328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.134337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.134535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.134704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.134713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.134805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.134923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.134932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.135157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.135262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.135272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 
00:26:46.603 [2024-05-15 17:17:34.135365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.135556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.135566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.135744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.135835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.135845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.135934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.136035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.136045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.136162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.136295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.136305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.136481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.136642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.136652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.136756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.136856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.136866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.137034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.137129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.137139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 
00:26:46.603 [2024-05-15 17:17:34.137252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.137474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.137484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.137585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.137698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.137708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.137811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.137982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.137992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.138079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.138195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.138205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.138296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.138465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.138475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.138546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.138666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.138675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.138792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.138967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.138977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 
00:26:46.603 [2024-05-15 17:17:34.139056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.139211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.139221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.139308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.139533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.139543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.139737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.139959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.139968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.140149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.140338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.140348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.140447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.140558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.140568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.140666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.140842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.140851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.603 qpair failed and we were unable to recover it. 00:26:46.603 [2024-05-15 17:17:34.140954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.603 [2024-05-15 17:17:34.141121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.141130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.604 qpair failed and we were unable to recover it. 
00:26:46.604 [2024-05-15 17:17:34.141246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.141368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.141378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.604 qpair failed and we were unable to recover it. 00:26:46.604 [2024-05-15 17:17:34.141578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.141827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.141837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.604 qpair failed and we were unable to recover it. 00:26:46.604 [2024-05-15 17:17:34.141933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.142044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.142053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.604 qpair failed and we were unable to recover it. 00:26:46.604 [2024-05-15 17:17:34.142169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.142351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.142361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.604 qpair failed and we were unable to recover it. 00:26:46.604 [2024-05-15 17:17:34.142454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.142622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.142631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.604 qpair failed and we were unable to recover it. 00:26:46.604 [2024-05-15 17:17:34.142754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.142870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.142881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.604 qpair failed and we were unable to recover it. 00:26:46.604 [2024-05-15 17:17:34.142981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.143067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.604 [2024-05-15 17:17:34.143077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.604 qpair failed and we were unable to recover it. 
00:26:46.604 [2024-05-15 17:17:34.143237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.604 [2024-05-15 17:17:34.143322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.604 [2024-05-15 17:17:34.143332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:46.604 qpair failed and we were unable to recover it.
(17:17:34.143422 .. 17:17:34.153155: the same four-line sequence repeats continuously for tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420, errno = 111.)
00:26:46.605 [2024-05-15 17:17:34.153287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.605 [2024-05-15 17:17:34.153410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.605 [2024-05-15 17:17:34.153432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420
00:26:46.605 qpair failed and we were unable to recover it.
00:26:46.605 [2024-05-15 17:17:34.153559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.605 [2024-05-15 17:17:34.153686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.605 [2024-05-15 17:17:34.153700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420
00:26:46.605 qpair failed and we were unable to recover it.
(17:17:34.153872 .. 17:17:34.164907: the same four-line sequence repeats continuously for tqpair=0x7f9400000b90 with addr=10.0.0.2, port=4420, errno = 111.)
00:26:46.606 [2024-05-15 17:17:34.165147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.606 [2024-05-15 17:17:34.165275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.606 [2024-05-15 17:17:34.165285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:46.606 qpair failed and we were unable to recover it.
(17:17:34.165448 .. 17:17:34.182723, log time 00:26:46.606-00:26:46.608: the same four-line sequence repeats continuously for tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420, errno = 111.)
00:26:46.608 [2024-05-15 17:17:34.182884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.182991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.183001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.608 qpair failed and we were unable to recover it. 00:26:46.608 [2024-05-15 17:17:34.183093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.183196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.183206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.608 qpair failed and we were unable to recover it. 00:26:46.608 [2024-05-15 17:17:34.183310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.183408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.183418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.608 qpair failed and we were unable to recover it. 00:26:46.608 [2024-05-15 17:17:34.183577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.183676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.183685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.608 qpair failed and we were unable to recover it. 00:26:46.608 [2024-05-15 17:17:34.183801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.183956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.183966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.608 qpair failed and we were unable to recover it. 00:26:46.608 [2024-05-15 17:17:34.184061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.184150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.184159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.608 qpair failed and we were unable to recover it. 00:26:46.608 [2024-05-15 17:17:34.184318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.184420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.184429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.608 qpair failed and we were unable to recover it. 
00:26:46.608 [2024-05-15 17:17:34.184544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.184784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.184794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.608 qpair failed and we were unable to recover it. 00:26:46.608 [2024-05-15 17:17:34.184897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.184998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.608 [2024-05-15 17:17:34.185008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.608 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.185170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.185271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.185282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.185381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.185494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.185504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.185601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.185694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.185703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.185796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.185902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.185912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.186001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.186097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.186106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 
00:26:46.609 [2024-05-15 17:17:34.186208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.186315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.186326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.186417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.186524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.186534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.186694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.186787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.186796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.186920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.187014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.187023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.187186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.187335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.187345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.187521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.187691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.187701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.187807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.187997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.188007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 
00:26:46.609 [2024-05-15 17:17:34.188103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.188214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.188224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.188333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.188502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.188511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.188675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.188847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.188856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.188964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.189034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.189044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.189136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.189231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.189241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.189328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.189488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.189497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.189560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.189662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.189672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 
00:26:46.609 [2024-05-15 17:17:34.189763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.189853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.189863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.189954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.190046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.190056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.190200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.190368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.190378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.190464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.190616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.190626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.190789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.190909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.190919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.191019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.191105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.191114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.191230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.191350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.191359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 
00:26:46.609 [2024-05-15 17:17:34.191498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.191658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.191667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.191757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.191846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.191856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.192036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.192136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.192146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.192246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.192336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.192345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.192456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.192711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.192721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.192819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.192983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.192992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.193240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.193449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.193458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 
00:26:46.609 [2024-05-15 17:17:34.193550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.193656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.193665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.193764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.193870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.193880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.193974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.194074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.194083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.194273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.194477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.194487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.194580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.194683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.194693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.194790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.194881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.194891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 00:26:46.609 [2024-05-15 17:17:34.195054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.195236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.609 [2024-05-15 17:17:34.195246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.609 qpair failed and we were unable to recover it. 
00:26:46.610 [2024-05-15 17:17:34.195402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.195560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.195569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.195663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.195788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.195798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.195916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.196123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.196132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.196234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.196340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.196349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.196444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.196610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.196620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.196794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.196964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.196973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.197250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.197351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.197361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 
00:26:46.610 [2024-05-15 17:17:34.197617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.197724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.197733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.197916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.198071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.198081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.198195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.198299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.198308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.198395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.198549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.198559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.198662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.198762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.198773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.198865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.198958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.198968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.199076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.199194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.199204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 
00:26:46.610 [2024-05-15 17:17:34.199294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.199393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.199403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.199513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.199601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.199610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.199708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.199827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.199836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.199933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.200087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.200097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.200185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.200293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.200303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.200394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.200527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.200537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.200634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.200811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.200820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 
00:26:46.610 [2024-05-15 17:17:34.200925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.201081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.201093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.201189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.201278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.201288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.201393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.201483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.201492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.201592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.201719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.201729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.201830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.201938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.201947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.202075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.202233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.202246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.202411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.202517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.202527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 
00:26:46.610 [2024-05-15 17:17:34.202594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.202677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.202687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.610 [2024-05-15 17:17:34.202840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.202951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.610 [2024-05-15 17:17:34.202961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.610 qpair failed and we were unable to recover it. 00:26:46.886 [2024-05-15 17:17:34.203067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.203232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.203243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-05-15 17:17:34.203403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.203493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.203506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-05-15 17:17:34.203596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.203709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.203719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-05-15 17:17:34.203881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.203984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.203993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-05-15 17:17:34.204105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.204212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.204223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 
00:26:46.886 [2024-05-15 17:17:34.204326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.204482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.204491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-05-15 17:17:34.204585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.204698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.204708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-05-15 17:17:34.204799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.204976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.204986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-05-15 17:17:34.205158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.205345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.205355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-05-15 17:17:34.205455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.205560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.205570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.886 qpair failed and we were unable to recover it. 00:26:46.886 [2024-05-15 17:17:34.205675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.886 [2024-05-15 17:17:34.205832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.205842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-05-15 17:17:34.205946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.206112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.206125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 
00:26:46.887 [2024-05-15 17:17:34.206213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.206370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.206380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-05-15 17:17:34.206474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.206638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.206647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-05-15 17:17:34.206745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.206846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.206856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-05-15 17:17:34.207001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.207102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.207111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-05-15 17:17:34.207203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.207294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.207303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-05-15 17:17:34.207420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.207518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.207528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 00:26:46.887 [2024-05-15 17:17:34.207629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.207723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.887 [2024-05-15 17:17:34.207732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.887 qpair failed and we were unable to recover it. 
00:26:46.887 [2024-05-15 17:17:34.207819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.887 [2024-05-15 17:17:34.207914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.887 [2024-05-15 17:17:34.207924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:46.887 qpair failed and we were unable to recover it.
[The three-line error group above repeats for every connection attempt from 17:17:34.207819 through 17:17:34.245400 (elapsed timestamps 00:26:46.887 to 00:26:46.892); only the per-attempt timestamps differ. Each attempt logs connect() failed, errno = 111 from posix_sock_create, then a sock connection error from nvme_tcp_qpair_connect_sock for tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it."]
00:26:46.892 [2024-05-15 17:17:34.245500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-05-15 17:17:34.245601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-05-15 17:17:34.245610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.892 qpair failed and we were unable to recover it. 00:26:46.892 [2024-05-15 17:17:34.245717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.892 [2024-05-15 17:17:34.245824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.245834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.246004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.246181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.246192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.246299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.246408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.246418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.246508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.246688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.246698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.246868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.246957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.246966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.247065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.247174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.247185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 
00:26:46.893 [2024-05-15 17:17:34.247271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.247364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.247373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.247471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.247625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.247634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.247727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.247824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.247833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.247933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.248033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.248043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.248129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.248223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.248234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.248326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.248425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.248434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.248614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.248718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.248728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 
00:26:46.893 [2024-05-15 17:17:34.248822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.248943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.248952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.249061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.249243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.249254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.249372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.249555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.249565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.249676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.249836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.249845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.249946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.250035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.250045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.250157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.250261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.250271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.250519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.250612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.250621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 
00:26:46.893 [2024-05-15 17:17:34.250719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.250819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.250828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.250994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.251170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.251180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.251432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.251520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.251529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.251627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.251733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.251743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.251846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.252012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.252021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.893 qpair failed and we were unable to recover it. 00:26:46.893 [2024-05-15 17:17:34.252117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.893 [2024-05-15 17:17:34.252220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.252231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.252390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.252470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.252479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 
00:26:46.894 [2024-05-15 17:17:34.252591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.252765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.252774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.252876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.253047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.253056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.253145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.253249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.253260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.253423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.253587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.253597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.253763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.253942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.253952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.254042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.254156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.254180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.254338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.254503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.254513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 
00:26:46.894 [2024-05-15 17:17:34.254618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.254732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.254742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.254911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.255012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.255021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.255121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.255210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.255220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.255310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.255400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.255410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.255573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.255671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.255680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.255784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.255881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.255890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.255985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.256157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.256178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 
00:26:46.894 [2024-05-15 17:17:34.256360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.256515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.256525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.256627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.256749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.256758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.256918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.257009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.257019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.257119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.257283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.257293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.257491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.257715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.257724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.257895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.257993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.258002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.258177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.258270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.258280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 
00:26:46.894 [2024-05-15 17:17:34.258423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.258503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.258512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.258603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.258850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.258860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.258988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.259145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.259155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.259315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.259423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.259432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.259538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.259641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.259651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.894 qpair failed and we were unable to recover it. 00:26:46.894 [2024-05-15 17:17:34.259745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.259909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.894 [2024-05-15 17:17:34.259919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.260016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.260120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.260129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 
00:26:46.895 [2024-05-15 17:17:34.260292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.260390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.260399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.260487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.260576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.260585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.260696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.260856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.260865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.260954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.261056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.261066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.261228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.261318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.261327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.261429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.261524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.261533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.261685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.261843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.261852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 
00:26:46.895 [2024-05-15 17:17:34.261958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.262071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.262081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.262183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.262299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.262309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.262412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.262635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.262644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.262811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.262906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.262916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.263037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.263137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.263147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.263316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.263419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.263428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.263540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.263705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.263714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 
00:26:46.895 [2024-05-15 17:17:34.263829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.263945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.263955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.264050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.264145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.264155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.264324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.264434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.264443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.264616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.264714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.264724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.264827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.264986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.264995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.265082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.265179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.265189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.265281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.265442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.265453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 
00:26:46.895 [2024-05-15 17:17:34.265541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.265618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.265627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.265786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.265892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.265901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.266058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.266148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.266157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.895 qpair failed and we were unable to recover it. 00:26:46.895 [2024-05-15 17:17:34.266272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.266375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.895 [2024-05-15 17:17:34.266384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.266475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.266569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.266579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.266685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.266794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.266804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.266906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.267003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.267013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 
00:26:46.896 [2024-05-15 17:17:34.267102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.267269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.267279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.267378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.267473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.267483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.267584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.267674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.267683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.267784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.267938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.267948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.268047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.268134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.268144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.268244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.268358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.268368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.268456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.268566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.268576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 
00:26:46.896 [2024-05-15 17:17:34.268731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.268897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.268906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.269012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.269160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.269186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.269352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.269447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.269456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.269615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.269782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.269792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.269952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.270055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.270065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.270173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.270280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.270290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 00:26:46.896 [2024-05-15 17:17:34.270387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.270476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.896 [2024-05-15 17:17:34.270485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.896 qpair failed and we were unable to recover it. 
00:26:46.896 [2024-05-15 17:17:34.270606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.896 [2024-05-15 17:17:34.270845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:46.896 [2024-05-15 17:17:34.270855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:46.896 qpair failed and we were unable to recover it.
[The same pattern repeats without interruption from 17:17:34.271022 through 17:17:34.321364 (console timestamps 00:26:46.896 to 00:26:46.902): each reconnect attempt logs one or two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, then nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock reports "sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420", followed by "qpair failed and we were unable to recover it."]
00:26:46.902 [2024-05-15 17:17:34.321620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.321774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.321783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.902 qpair failed and we were unable to recover it. 00:26:46.902 [2024-05-15 17:17:34.322045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.322332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.322342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.902 qpair failed and we were unable to recover it. 00:26:46.902 [2024-05-15 17:17:34.322468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.322579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.322589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.902 qpair failed and we were unable to recover it. 00:26:46.902 [2024-05-15 17:17:34.322716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.322868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.322877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.902 qpair failed and we were unable to recover it. 00:26:46.902 [2024-05-15 17:17:34.322989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.323163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.323176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.902 qpair failed and we were unable to recover it. 00:26:46.902 [2024-05-15 17:17:34.323460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.323708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.323718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.902 qpair failed and we were unable to recover it. 00:26:46.902 [2024-05-15 17:17:34.323944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.324171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.324181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.902 qpair failed and we were unable to recover it. 
00:26:46.902 [2024-05-15 17:17:34.324360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.324470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.324480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.902 qpair failed and we were unable to recover it. 00:26:46.902 [2024-05-15 17:17:34.324644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.324820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.324830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.902 qpair failed and we were unable to recover it. 00:26:46.902 [2024-05-15 17:17:34.325014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.325191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.902 [2024-05-15 17:17:34.325201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.325305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.325471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.325481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.325673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.325829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.325839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.326065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.326238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.326248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.326504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.326728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.326737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 
00:26:46.903 [2024-05-15 17:17:34.327014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.327269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.327279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.327450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.327670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.327680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.327857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.328014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.328024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.328137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.328363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.328373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.328576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.328749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.328759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.329023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.329257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.329268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.329517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.329739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.329749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 
00:26:46.903 [2024-05-15 17:17:34.329933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.330174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.330184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.330430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.330694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.330703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.330948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.331138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.331148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.331422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.331689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.331699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.331802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.332048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.332057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.332232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.332388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.332398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.332572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.332741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.332751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 
00:26:46.903 [2024-05-15 17:17:34.333001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.333273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.333283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.333530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.333691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.333702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.333934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.334155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.334168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.334345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.334516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.334525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.334749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.334970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.334980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.335142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.335375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.335385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.335626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.335899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.335909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 
00:26:46.903 [2024-05-15 17:17:34.336145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.336439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.336449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.336625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.336785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.336795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.903 [2024-05-15 17:17:34.337037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.337280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.903 [2024-05-15 17:17:34.337291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.903 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.337467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.337696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.337706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.337871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.338041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.338053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.338348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.338449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.338459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.338693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.338801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.338810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 
00:26:46.904 [2024-05-15 17:17:34.339055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.339276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.339287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.339455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.339626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.339636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.339824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.339921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.339931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.340112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.340357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.340366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.340531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.340703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.340713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.341009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.341177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.341187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.341440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.341612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.341622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 
00:26:46.904 [2024-05-15 17:17:34.341798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.342084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.342095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.342263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.342484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.342494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.342611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.342789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.342799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.342963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.343211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.343221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.343449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.343609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.343618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.343837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.344058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.344068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.344258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.344363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.344373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 
00:26:46.904 [2024-05-15 17:17:34.344571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.344692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.344701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.344804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.345044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.345053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.345279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.345469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.345479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.345589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.345819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.345831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.346077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.346351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.346361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.346585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.346827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.346836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.347059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.347214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.347224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 
00:26:46.904 [2024-05-15 17:17:34.347477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.347656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.347666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.347972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.348169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.348178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.348348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.348596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.348605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.904 qpair failed and we were unable to recover it. 00:26:46.904 [2024-05-15 17:17:34.348847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.904 [2024-05-15 17:17:34.348973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.348983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.349206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.349448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.349458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.349732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.350016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.350026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.350274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.350447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.350457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 
00:26:46.905 [2024-05-15 17:17:34.350707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.350872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.350881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.351141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.351367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.351377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.351561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.351817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.351827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.352024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.352216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.352226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.352386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.352628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.352638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.352799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.352977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.352987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.353172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.353347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.353356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 
00:26:46.905 [2024-05-15 17:17:34.353632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.353821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.353831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.354005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.354177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.354187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.354300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.354588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.354597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.354833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.355078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.355088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.355280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.355536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.355545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.355716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.355889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.355898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.356157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.356358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.356367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 
00:26:46.905 [2024-05-15 17:17:34.356556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.356713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.356722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.356894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.357133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.357142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.357367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.357649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.357659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.357850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.358022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.358032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.358275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.358379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.358389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.358589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.358684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.358693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.905 qpair failed and we were unable to recover it. 00:26:46.905 [2024-05-15 17:17:34.358947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.905 [2024-05-15 17:17:34.359181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.359192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 
00:26:46.906 [2024-05-15 17:17:34.359382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.359506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.359516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.359762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.359977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.359987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.360210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.360463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.360473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.360696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.360929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.360938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.361095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.361275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.361285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.361529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.361776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.361785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.362014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.362171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.362181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 
00:26:46.906 [2024-05-15 17:17:34.362373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.362530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.362539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.362749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.362941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.362950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.363136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.363257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.363267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.363512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.363785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.363795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.364033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.364277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.364287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.364545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.364771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.364781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.364934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.365095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.365104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 
00:26:46.906 [2024-05-15 17:17:34.365275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.365528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.365538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.365799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.365998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.366008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.366194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.366441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.366450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.366624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.366746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.366756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.367002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.367212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.367222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.367423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.367614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.367624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.367799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.367890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.367900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 
00:26:46.906 [2024-05-15 17:17:34.368122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.368305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.368315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.368553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.368800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.368810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.368975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.369150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.369159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.369343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.369537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.369547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.369664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.369910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.369920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.370156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.370332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.370342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.906 qpair failed and we were unable to recover it. 00:26:46.906 [2024-05-15 17:17:34.370502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.370748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.906 [2024-05-15 17:17:34.370758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 
00:26:46.907 [2024-05-15 17:17:34.371010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.371186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.371197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.371445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.371650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.371659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.371826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.371993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.372002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.372196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.372315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.372325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.372588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.372758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.372768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.372989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.373261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.373271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.373432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.373679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.373688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 
00:26:46.907 [2024-05-15 17:17:34.373915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.374172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.374182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.374366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.374610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.374620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.374811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.374938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.374947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.375191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.375439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.375448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.375639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.375864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.375873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.376047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.376161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.376173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.376431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.376654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.376664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 
00:26:46.907 [2024-05-15 17:17:34.376920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.377175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.377185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.377357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.377603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.377612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.377858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.378122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.378131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.378383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.378637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.378646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.378803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.378986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.378996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.379205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.379458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.379467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.379711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.379966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.379976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 
00:26:46.907 [2024-05-15 17:17:34.380225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.380400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.380410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.380619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.380790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.380799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.380956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.381202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.381212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.381385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.381609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.381618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.381806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.382039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.382049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.382280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.382551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.382561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 00:26:46.907 [2024-05-15 17:17:34.382819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.382995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.907 [2024-05-15 17:17:34.383004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.907 qpair failed and we were unable to recover it. 
00:26:46.908 [2024-05-15 17:17:34.383269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.383444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.383454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.383701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.383803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.383813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.383995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.384173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.384183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.384349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.384520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.384529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.384634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.384858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.384868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.385049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.385292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.385302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.385496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.385743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.385753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 
00:26:46.908 [2024-05-15 17:17:34.386003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.386259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.386269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.386372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.386641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.386651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.386827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.386930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.386940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.387125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.387279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.387290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.387449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.387639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.387649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.387922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.388104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.388114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.388397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.388620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.388630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 
00:26:46.908 [2024-05-15 17:17:34.388880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.389054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.389064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.389310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.389424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.389434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.389530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.389636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.389645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.389751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.390004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.390013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.390262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.390438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.390448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.390696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.390863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.390873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.391059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.391238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.391248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 
00:26:46.908 [2024-05-15 17:17:34.391499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.391739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.391749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.391946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.392169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.392179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.392358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.392482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.392491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.392728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.392882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.392892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.393130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.393285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.393295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.393519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.393787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.393797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 00:26:46.908 [2024-05-15 17:17:34.394055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.394281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.908 [2024-05-15 17:17:34.394291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.908 qpair failed and we were unable to recover it. 
00:26:46.908 [2024-05-15 17:17:34.394523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.394769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.394778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.395019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.395190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.395200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.395420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.395677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.395686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.395872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.396046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.396056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.396314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.396574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.396584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.396688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.396846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.396856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.397093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.397325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.397335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 
00:26:46.909 [2024-05-15 17:17:34.397559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.397783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.397793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.398038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.398259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.398269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.398516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.398628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.398639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.398906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.399153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.399163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.399334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.399582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.399592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.399842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.399945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.399954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.400220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.400392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.400402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 
00:26:46.909 [2024-05-15 17:17:34.400651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.400751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.400761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.400871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.401118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.401129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.401344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.401518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.401528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.401719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.401880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.401889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.402134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.402289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.402299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.402482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.402652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.402661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.402885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.403004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.403014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 
00:26:46.909 [2024-05-15 17:17:34.403257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.403371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.403380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.403645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.403812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.403822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.404069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.404306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.404316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.404474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.404671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.404681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.404855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.405100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.405112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.405283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.405532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.405541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 00:26:46.909 [2024-05-15 17:17:34.405720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.405985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.909 [2024-05-15 17:17:34.405994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.909 qpair failed and we were unable to recover it. 
00:26:46.909 [2024-05-15 17:17:34.406096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.406273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.406283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.406502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.406657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.406667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.406839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.407068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.407077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.407343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.407600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.407609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.407842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.408018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.408027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.408147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.408316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.408326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.408507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.408752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.408761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 
00:26:46.910 [2024-05-15 17:17:34.409013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.409269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.409281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.409507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.409699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.409709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.409957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.410207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.410217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.410418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.410649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.410658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.410880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.411045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.411055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.411331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.411579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.411588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.411820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.412094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.412104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 
00:26:46.910 [2024-05-15 17:17:34.412331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.412503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.412513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.412677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.412897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.412906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.413155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.413317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.413327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.413554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.413805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.413817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.414085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.414259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.414270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.414396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.414582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.414592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.414690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.414930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.414939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 
00:26:46.910 [2024-05-15 17:17:34.415146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.415374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.415384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.415629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.415799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.415809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.910 qpair failed and we were unable to recover it. 00:26:46.910 [2024-05-15 17:17:34.415919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.416083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.910 [2024-05-15 17:17:34.416092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.416182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.416361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.416370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.416549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.416793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.416802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.416915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.417074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.417084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.417208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.417454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.417463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 
00:26:46.911 [2024-05-15 17:17:34.417718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.417993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.418002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.418095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.418365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.418375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.418625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.418856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.418866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.419024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.419181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.419191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.419434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.419698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.419707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.419882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.420108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.420117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.420232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.420385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.420395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 
00:26:46.911 [2024-05-15 17:17:34.420574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.420823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.420832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.421002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.421172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.421182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.421272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.421445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.421454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.421706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.421896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.421906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.422099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.422300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.422310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.422560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.422780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.422790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.422968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.423215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.423225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 
00:26:46.911 [2024-05-15 17:17:34.423446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.423668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.423677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.423793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.423902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.423912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.424010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.424229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.424239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.424408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.424679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.424688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.424955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.425124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.425134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.425290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.425513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.425522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.425773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.425882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.425892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 
00:26:46.911 [2024-05-15 17:17:34.426055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.426225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.426236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.426396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.426652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.426662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.426925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.427183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.911 [2024-05-15 17:17:34.427193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.911 qpair failed and we were unable to recover it. 00:26:46.911 [2024-05-15 17:17:34.427394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.427558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.427568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.427790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.427957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.427967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.428086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.428250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.428261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.428432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.428672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.428682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 
00:26:46.912 [2024-05-15 17:17:34.428885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.429128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.429137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.429251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.429369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.429379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.429645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.429911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.429921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.430092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.430277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.430287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.430443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.430635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.430645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.430806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.431048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.431058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.431221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.431455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.431465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 
00:26:46.912 [2024-05-15 17:17:34.431706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.431860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.431869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.432109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.432277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.432287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.432534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.432766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.432776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.432956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.433144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.433153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.433415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.433568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.433578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.433811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.433975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.433985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.434232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.434388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.434398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 
00:26:46.912 [2024-05-15 17:17:34.434570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.434791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.434801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.435042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.435313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.435323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.435571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.435727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.435736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.435969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.436126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.436136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.436384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.436499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.436508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.436674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.436894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.436903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.437065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.437319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.437329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 
00:26:46.912 [2024-05-15 17:17:34.437422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.437602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.437612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.437777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.437892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.437901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.438058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.438241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.438251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.912 qpair failed and we were unable to recover it. 00:26:46.912 [2024-05-15 17:17:34.438500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.912 [2024-05-15 17:17:34.438601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.438610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.438775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.439015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.439024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.439187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.439380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.439389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.439604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.439845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.439855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 
00:26:46.913 [2024-05-15 17:17:34.440095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.440367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.440377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.440545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.440737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.440747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.440914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.441108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.441117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.441347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.441621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.441630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.441871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.442026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.442036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.442275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.442443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.442453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.442634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.442829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.442838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 
00:26:46.913 [2024-05-15 17:17:34.443040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.443217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.443227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.443420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.443693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.443703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.443938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.444185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.444195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.444302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.444464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.444474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.444671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.444892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.444902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.445010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.445185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.445195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.445364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.445529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.445539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 
00:26:46.913 [2024-05-15 17:17:34.445792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.446030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.446039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.446289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.446474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.446484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.446681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.446836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.446845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.447020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.447193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.447203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.447358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.447530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.447540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.447792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.448016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.448025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.448211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.448454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.448463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 
00:26:46.913 [2024-05-15 17:17:34.448710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.448883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.448893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.449015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.449178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.449187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.449404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.449518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.449528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.913 [2024-05-15 17:17:34.449748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.449999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.913 [2024-05-15 17:17:34.450009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.913 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.450279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.450509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.450518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.450739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.450903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.450913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.451172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.451418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.451427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 
00:26:46.914 [2024-05-15 17:17:34.451638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.451793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.451802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.452046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.452293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.452303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.452566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.452762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.452771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.453017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.453267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.453277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.453455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.453620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.453630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.453830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.454049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.454059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.454324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.454613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.454622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 
00:26:46.914 [2024-05-15 17:17:34.454813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.455008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.455018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.455194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.455359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.455370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.455637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.455770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.455779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.456026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.456275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.456285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.456531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.456794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.456804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.456974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.457221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.457231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.457455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.457626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.457635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 
00:26:46.914 [2024-05-15 17:17:34.457881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.458109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.458119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.458364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.458610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.458619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.458811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.459032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.459041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.459221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.459464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.459474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.459696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.459804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.459813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.460037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.460285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.460295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.460474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.460629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.460639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 
00:26:46.914 [2024-05-15 17:17:34.460901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.461073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.461083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.914 qpair failed and we were unable to recover it. 00:26:46.914 [2024-05-15 17:17:34.461182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.461431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.914 [2024-05-15 17:17:34.461440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.461597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.461858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.461867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.461969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.462142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.462151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.462404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.462572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.462582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.462829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.463015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.463027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.463275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.463495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.463505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 
00:26:46.915 [2024-05-15 17:17:34.463777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.463965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.463974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.464195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.464434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.464444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.464609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.464780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.464790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.465031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.465196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.465206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.465430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.465535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.465545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.465768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.465986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.465996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.466245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.466409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.466418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 
00:26:46.915 [2024-05-15 17:17:34.466594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.466855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.466865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.467088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.467359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.467370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.467617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.467808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.467818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.467980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.468225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.468235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.468412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.468582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.468592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.468863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.469027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.469037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.469147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.469323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.469333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 
00:26:46.915 [2024-05-15 17:17:34.469527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.469688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.469698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.469866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.469998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.470008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.470252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.470423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.470433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.470632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.470742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.470751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.470915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.471185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.471197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.471445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.471690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.471699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.915 qpair failed and we were unable to recover it. 00:26:46.915 [2024-05-15 17:17:34.471952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.472225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.915 [2024-05-15 17:17:34.472235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 
00:26:46.916 [2024-05-15 17:17:34.472393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.472498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.472507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.472684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.472903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.472914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.473036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.473205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.473215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.473446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.473691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.473701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.473942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.474210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.474221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.474471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.474593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.474604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.474828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.475081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.475091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 
00:26:46.916 [2024-05-15 17:17:34.475358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.475496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.475508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.475768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.475965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.475975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.476130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.476314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.476324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.476506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.476660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.476670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.476851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.477099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.477108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.477341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.477447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.477456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.477694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.477866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.477876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 
00:26:46.916 [2024-05-15 17:17:34.478044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.478242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.478252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.478357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.478539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.478549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.478751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.478937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.478947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.479149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.479289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.479298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.479480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.479721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.479731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.479854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.479945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.479955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.480232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.480458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.480468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 
00:26:46.916 [2024-05-15 17:17:34.480691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.480816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.480825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.481048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.481146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.481155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.481342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.481589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.481599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.481771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.482001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.482011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.482191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.482388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.482398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.482627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.482802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.482812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 00:26:46.916 [2024-05-15 17:17:34.482995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.483186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.916 [2024-05-15 17:17:34.483196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.916 qpair failed and we were unable to recover it. 
00:26:46.916 [2024-05-15 17:17:34.483447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.483695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.483704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.483929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.484098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.484108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.484282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.484469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.484479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.484751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.484999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.485008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.485257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.485513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.485523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.485714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.485888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.485897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.486120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.486235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.486246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 
00:26:46.917 [2024-05-15 17:17:34.486516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.486743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.486752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.486875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.487000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.487009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.487231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.487464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.487473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.487636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.487826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.487835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.488099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.488216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.488226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.488420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.488605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.488617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.488720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.488973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.488983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 
00:26:46.917 [2024-05-15 17:17:34.489205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.489452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.489462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.489653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.489901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.489911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.490133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.490422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.490432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.490686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.490907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.490917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.491038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.491260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.491270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.491510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.491732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.491742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.491914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.492162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.492176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 
00:26:46.917 [2024-05-15 17:17:34.492441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.492566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.492576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.492769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.492953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.492963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.493206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.493453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.493463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.493568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.493681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.493691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.493792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.493903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.493912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.494074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.494174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.494185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 00:26:46.917 [2024-05-15 17:17:34.494341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.494432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.494442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.917 qpair failed and we were unable to recover it. 
00:26:46.917 [2024-05-15 17:17:34.494617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.917 [2024-05-15 17:17:34.494709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.494718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.494819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.495062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.495072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.495298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.495474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.495484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.495668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.495771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.495781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.495942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.496101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.496111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.496278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.496452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.496462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.496580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.496756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.496766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 
00:26:46.918 [2024-05-15 17:17:34.496874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.497032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.497042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.497152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.497260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.497271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.497378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.497574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.497584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.497677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.497909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.497919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.498018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.498153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.498162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.498287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.498447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.498457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.498633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.498758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.498767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 
00:26:46.918 [2024-05-15 17:17:34.498934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.499020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.499030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.499190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.499287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.499296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.499459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.499552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.499561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.499718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.500015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.500025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.500186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.500410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.500420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.500593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.500769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.500778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.500905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.501148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.501157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 
00:26:46.918 [2024-05-15 17:17:34.501326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.501461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.501470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.501680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.501837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.501847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.502062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.502169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.502180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.502373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.502547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.502556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.502656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.502811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.502820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.503092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.503337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.503347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.918 [2024-05-15 17:17:34.503581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.503786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.503796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 
00:26:46.918 [2024-05-15 17:17:34.504035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.504229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.918 [2024-05-15 17:17:34.504239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.918 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.504417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.504608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.504618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.504795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.504961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.504971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.505161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.505281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.505292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.505374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.505551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.505561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.505739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.505801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.505811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.505996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.506158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.506176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 
00:26:46.919 [2024-05-15 17:17:34.506277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.506392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.506403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.506505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.506681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.506692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.506882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.506993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.507002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.507226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.507469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.507479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.507652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.507745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.507754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.507913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.508075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.508084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.508184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.508303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.508313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 
00:26:46.919 [2024-05-15 17:17:34.508497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.508587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.508597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.508773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.508871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.508881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.509037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.509188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.509198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.509363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.509522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.509531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.509769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.509875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.509885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.510035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.510276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.510287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.510446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.510554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.510564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 
00:26:46.919 [2024-05-15 17:17:34.510730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.510913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.510922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.511095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.511200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.511210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.511314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.511492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.511501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.919 qpair failed and we were unable to recover it. 00:26:46.919 [2024-05-15 17:17:34.511779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.511874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.919 [2024-05-15 17:17:34.511885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.511974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.512137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.512147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.512311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.512470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.512480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.512562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.512737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.512747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 
00:26:46.920 [2024-05-15 17:17:34.512972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.513142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.513151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.513259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.513488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.513498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.513596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.513756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.513766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.513941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.514113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.514123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.514294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.514396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.514405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.514653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.514747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.514756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.514862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.514961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.514971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 
00:26:46.920 [2024-05-15 17:17:34.515086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.515333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.515344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.515503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.515614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.515624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.515714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.515964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.515974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.516079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.516247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.516256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.516374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.516553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.516563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.516664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.516765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.516775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.517017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.517094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.517103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 
00:26:46.920 [2024-05-15 17:17:34.517269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.517375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.517385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.517498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.517617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.517627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.517740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.517896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.517906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.517975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.518135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.518145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.518282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.518511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.518521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.518677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.518790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.518800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.518935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.519043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.519052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 
00:26:46.920 [2024-05-15 17:17:34.519151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.519247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.519257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.519511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.519682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.519692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.519827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.519932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.519942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.520172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.520398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.520408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.520576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.520746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.520755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.920 qpair failed and we were unable to recover it. 00:26:46.920 [2024-05-15 17:17:34.520866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.920 [2024-05-15 17:17:34.521122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.521133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.521303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.521450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.521459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 
00:26:46.921 [2024-05-15 17:17:34.521573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.521780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.521789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.521961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.522069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.522078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.522179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.522297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.522307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.522544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.522698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.522708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.522823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.523075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.523084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.523257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.523374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.523383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.523541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.523723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.523732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 
00:26:46.921 [2024-05-15 17:17:34.523973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.524137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.524147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.524312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.524471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.524483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.524643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.524847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.524856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.525042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.525295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.525305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.525426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.525530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.525540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.525717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.525827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.525837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.525989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.526105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.526115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 
00:26:46.921 [2024-05-15 17:17:34.526208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.526372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.526381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.526633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.526851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:46.921 [2024-05-15 17:17:34.526860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:46.921 qpair failed and we were unable to recover it. 00:26:46.921 [2024-05-15 17:17:34.527050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-05-15 17:17:34.527309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-05-15 17:17:34.527319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-05-15 17:17:34.527498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-05-15 17:17:34.527598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-05-15 17:17:34.527608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-05-15 17:17:34.527801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-05-15 17:17:34.527971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-05-15 17:17:34.527982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-05-15 17:17:34.528233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-05-15 17:17:34.528346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-05-15 17:17:34.528356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 00:26:47.192 [2024-05-15 17:17:34.528534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-05-15 17:17:34.528689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-05-15 17:17:34.528700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.192 qpair failed and we were unable to recover it. 
00:26:47.192 [2024-05-15 17:17:34.528862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.192 [2024-05-15 17:17:34.529040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.529050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.529157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.529327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.529338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.529440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.529542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.529552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.529798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.529931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.529940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.530100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.530291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.530302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.530461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.530564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.530573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.530687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.530793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.530802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 
00:26:47.193 [2024-05-15 17:17:34.530962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.531069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.531081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.531252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.531417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.531426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.531660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.531894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.531904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.532086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.532187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.532198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.532364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.532470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.532480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.532586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.532762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.532771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.532927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.533036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.533045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 
00:26:47.193 [2024-05-15 17:17:34.533181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.533270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.533279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.533466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.533634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.533643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.533808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.533993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.534003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.534107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.534204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.534214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.534378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.534488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.534498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.534741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.534844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.534854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.535085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.535255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.535269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 
00:26:47.193 [2024-05-15 17:17:34.535442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.535546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.535555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.535813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.535985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.535995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.536117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.536312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.536322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.536491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.536651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.536660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.536885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.537049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.537058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.537172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.537342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.537352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 00:26:47.193 [2024-05-15 17:17:34.537522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.537688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.537697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.193 qpair failed and we were unable to recover it. 
00:26:47.193 [2024-05-15 17:17:34.537812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.537979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.193 [2024-05-15 17:17:34.537989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.538090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.538319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.538330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.538526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.538691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.538700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.538866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.539042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.539051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.539153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.539340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.539351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.539523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.539612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.539622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.539875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.539998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.540008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 
00:26:47.194 [2024-05-15 17:17:34.540235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.540423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.540433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.540544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.540709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.540718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.540862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.540970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.540980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.541246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.541404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.541414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.541585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.541696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.541706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.541879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.542000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.542010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.542129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.542305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.542315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 
00:26:47.194 [2024-05-15 17:17:34.542428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.542588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.542598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.542758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.542876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.542887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.543063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.543345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.543355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.543454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.543623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.543633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.543801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.543920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.543929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.544105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.544199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.544209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.544327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.544438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.544447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 
00:26:47.194 [2024-05-15 17:17:34.544618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.544776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.544785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.544950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.545126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.545135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.545252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.545405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.545415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.545570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.545673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.545683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.545832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.545990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.546000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.546091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.546199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.546209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.194 [2024-05-15 17:17:34.546463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.546568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.546578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 
00:26:47.194 [2024-05-15 17:17:34.546679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.546780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.194 [2024-05-15 17:17:34.546790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.194 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.546908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.546999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.547009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.547128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.547294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.547303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.547461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.547629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.547638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.547739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.547827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.547837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.548060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.548295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.548305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.548461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.548562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.548572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 
00:26:47.195 [2024-05-15 17:17:34.548708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.548931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.548940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.549110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.549300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.549310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.549393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.549523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.549532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.549728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.549842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.549852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.549999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.550107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.550116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.550367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.550523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.550532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.550637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.550739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.550749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 
00:26:47.195 [2024-05-15 17:17:34.550874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.551034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.551043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.551199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.551301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.551310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.551436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.551542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.551551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.551797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.551907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.551916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.552072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.552255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.552265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.552399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.552565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.552575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.552848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.552960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.552969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 
00:26:47.195 [2024-05-15 17:17:34.553037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.553203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.553213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.553393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.553512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.553522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.553749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.553871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.553882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.554044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.554202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.554212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.554319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.554482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.554493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.554561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.554730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.554739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.554845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.554947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.554957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 
00:26:47.195 [2024-05-15 17:17:34.555207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.555444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.555454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.195 qpair failed and we were unable to recover it. 00:26:47.195 [2024-05-15 17:17:34.555573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.195 [2024-05-15 17:17:34.555662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.555672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.555857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.555953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.555963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.556123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.556355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.556365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.556481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.556649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.556658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.556753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.556928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.556938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.557029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.557181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.557191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 
00:26:47.196 [2024-05-15 17:17:34.557286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.557447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.557456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.557590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.557816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.557826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.557993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.558090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.558100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.558271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.558449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.558459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.558685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.558788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.558798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.558975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.559122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.559132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.559380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.559591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.559601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 
00:26:47.196 [2024-05-15 17:17:34.559758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.559861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.559871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.560096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.560229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.560239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.560416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.560512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.560522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.560616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.560839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.560849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.561079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.561248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.561258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.561374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.561526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.561536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.561638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.561839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.561849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 
00:26:47.196 [2024-05-15 17:17:34.562029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.562202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.562213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.562326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.562541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.562551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.562715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.562946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.562955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.563043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.563297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.563307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.563534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.563691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.563701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.563963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.564067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.564077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.564203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.564314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.564324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 
00:26:47.196 [2024-05-15 17:17:34.564567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.564732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.564742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.196 qpair failed and we were unable to recover it. 00:26:47.196 [2024-05-15 17:17:34.564967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.565141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.196 [2024-05-15 17:17:34.565151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.565312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.565508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.565517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.565608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.565827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.565837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.565935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.566106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.566116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.566287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.566447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.566457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.566691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.566782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.566792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 
00:26:47.197 [2024-05-15 17:17:34.567015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.567239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.567249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.567368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.567455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.567465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.567645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.567810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.567820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.567991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.568175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.568185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.568287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.568511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.568523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.568753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.568970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.568980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.569097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.569326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.569336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 
00:26:47.197 [2024-05-15 17:17:34.569507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.569752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.569761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.569832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.570054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.570064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.570246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.570494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.570503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.570671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.570832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.570841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.571014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.571199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.571209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.571277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.571353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.571362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.571530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.571709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.571718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 
00:26:47.197 [2024-05-15 17:17:34.571797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.571969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.571978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.572149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.572342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.572352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.572522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.572679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.197 [2024-05-15 17:17:34.572688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.197 qpair failed and we were unable to recover it. 00:26:47.197 [2024-05-15 17:17:34.572862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.573030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.573040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.573263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.573434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.573445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.573615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.573784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.573796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.573973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.574132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.574142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 
00:26:47.198 [2024-05-15 17:17:34.574259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.574449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.574459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.574634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.574856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.574866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.574985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.575205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.575215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.575384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.575558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.575568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.575737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.575852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.575861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.576031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.576143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.576152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.576366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.576466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.576475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 
00:26:47.198 [2024-05-15 17:17:34.576725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.576950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.576959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.577203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.577359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.577371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.577457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.577570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.577580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.577814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.577919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.577929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.578098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.578277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.578287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.578463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.578639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.578649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.578875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.579106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.579115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 
00:26:47.198 [2024-05-15 17:17:34.579303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.579482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.579491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.579720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.579882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.579891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.580064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.580283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.580294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.580412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.580534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.580543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.580657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.580882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.580894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.581057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.581213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.581223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.581386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.581606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.581618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 
00:26:47.198 [2024-05-15 17:17:34.581817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.582012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.582022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.582192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.582390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.582400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.582628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.582728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.582737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.198 qpair failed and we were unable to recover it. 00:26:47.198 [2024-05-15 17:17:34.582852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.198 [2024-05-15 17:17:34.582954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.582964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.583138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.583238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.583249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.583343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.583507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.583518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.583674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.583789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.583799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 
00:26:47.199 [2024-05-15 17:17:34.584049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.584161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.584182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.584342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.584443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.584453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.584622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.584722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.584733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.584927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.585043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.585054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.585197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.585371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.585381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.585483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.585582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.585592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.585754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.585868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.585879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 
00:26:47.199 [2024-05-15 17:17:34.586035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.586278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.586289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.586445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.586622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.586633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.586730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.586816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.586825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.587062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.587231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.587241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.587397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.587627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.587637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.587734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.587926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.587936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.588111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.588287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.588297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 
00:26:47.199 [2024-05-15 17:17:34.588467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.588575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.588585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.588665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.588775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.588785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.588953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.589112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.589122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.589306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.589406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.589417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.589518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.589679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.589689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.590009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.590174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.590186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.590359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.590526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.590536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 
00:26:47.199 [2024-05-15 17:17:34.590724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.590833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.590843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.591020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.591180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.591191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.591358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.591463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.591474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.199 [2024-05-15 17:17:34.591657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.591820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.199 [2024-05-15 17:17:34.591829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.199 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.591917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.591990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.592000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.592088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.592249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.592259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.592417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.592612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.592622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 
00:26:47.200 [2024-05-15 17:17:34.592795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.592901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.592911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.592998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.593222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.593232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.593337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.593435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.593445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.593696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.593895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.593905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.594073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.594320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.594331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.594457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.594559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.594569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.594729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.594914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.594925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 
00:26:47.200 [2024-05-15 17:17:34.595095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.595345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.595356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.595547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.595711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.595721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.595970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.596086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.596097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.596358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.596542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.596553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.596801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.596974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.596984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.597103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.597213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.597223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.597390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.597515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.597525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 
00:26:47.200 [2024-05-15 17:17:34.597750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.597980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.597990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.598103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.598269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.598280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.598447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.598690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.598700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.598948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.599133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.599144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.599333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.599506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.599516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.599716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.599900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.599910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.600024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.600216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.600227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 
00:26:47.200 [2024-05-15 17:17:34.600402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.600505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.600515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.600672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.600864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.600874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.601030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.601250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.601261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.601482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.601663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.200 [2024-05-15 17:17:34.601673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.200 qpair failed and we were unable to recover it. 00:26:47.200 [2024-05-15 17:17:34.601877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.602066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.602076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.602262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.602398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.602408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.602628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.602738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.602748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 
00:26:47.201 [2024-05-15 17:17:34.602935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.603039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.603049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.603217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.603441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.603452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.603649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.603756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.603766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.603881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.604052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.604062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.604258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.604413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.604423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.604591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.604786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.604796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.604889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.604997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.605007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 
00:26:47.201 [2024-05-15 17:17:34.605188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.605382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.605392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.605548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.605701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.605711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.605887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.606004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.606014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.606108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.606220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.606230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.606456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.606647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.606657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.606764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.606862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.606872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.606978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.607130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.607140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 
00:26:47.201 [2024-05-15 17:17:34.607317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.607541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.607551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.607779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.607949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.607959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.608184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.608404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.608414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.608506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.608660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.608670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.608775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.608956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.608966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.609144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.609297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.609307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.609482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.609678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.609688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 
00:26:47.201 [2024-05-15 17:17:34.609880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.609967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.609977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.610088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.610313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.610323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.610499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.610752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.610762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.610879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.610996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.611007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.201 qpair failed and we were unable to recover it. 00:26:47.201 [2024-05-15 17:17:34.611223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.611425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.201 [2024-05-15 17:17:34.611442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.611658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.611940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.611954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.612077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.612257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.612272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 
00:26:47.202 [2024-05-15 17:17:34.612384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.612564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.612578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.612810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.613004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.613017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.613225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.613348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.613363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.613473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.613727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.613741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.613858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.613976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.613989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.614097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.614293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.614308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.614441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.614553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.614568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 
00:26:47.202 [2024-05-15 17:17:34.614744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.614932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.614946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.615149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.615324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.615338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.615502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.615757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.615770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.615937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.616037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.616051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.616174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.616270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.616283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.616542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.616717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.616730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.616851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.617090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.617104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 
00:26:47.202 [2024-05-15 17:17:34.617286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.617460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.617474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.617672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.617780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.617794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.617989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.618147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.618161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.618293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.618531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.618552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.618772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.618886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.618899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.619121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.619293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.619307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 00:26:47.202 [2024-05-15 17:17:34.619436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.619600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.619614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.202 qpair failed and we were unable to recover it. 
00:26:47.202 [2024-05-15 17:17:34.619727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.202 [2024-05-15 17:17:34.620009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.620022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.620197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.620312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.620326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.620506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.620738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.620751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.620871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.621046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.621060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.621291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.621471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.621485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.621666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.621836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.621849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.621964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.622078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.622092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 
00:26:47.203 [2024-05-15 17:17:34.622263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.622368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.622382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.622497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.622606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.622620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.622877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.622984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.622998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.623250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.623453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.623467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.623562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.623745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.623759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.623927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.624067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.624081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.624339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.624452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.624466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 
00:26:47.203 [2024-05-15 17:17:34.624646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.624823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.624837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.624961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.625065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.625078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.625367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.625561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.625575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.625704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.625886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.625900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.626011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.626252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.626267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.626437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.626586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.626600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.626699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.626859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.626872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 
00:26:47.203 [2024-05-15 17:17:34.627052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.627148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.627161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.627422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.627616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.627630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.627819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.627992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.628005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.628188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.628389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.628402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.628604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.628712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.628725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.628903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.629003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.629017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244fc10 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.629189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.629348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.629359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 
00:26:47.203 [2024-05-15 17:17:34.629487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.629736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.203 [2024-05-15 17:17:34.629747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.203 qpair failed and we were unable to recover it. 00:26:47.203 [2024-05-15 17:17:34.629921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.630086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.630097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.630176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.630291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.630300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.630398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.630628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.630639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.630870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.631037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.631047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.631191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.631360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.631370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.631528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.631776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.631786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 
00:26:47.204 [2024-05-15 17:17:34.631985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.632088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.632099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.632256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.632477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.632488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.632668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.632782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.632793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.633017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.633123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.633133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.633330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.633524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.633535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.633631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.633788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.633797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.633981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.634232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.634243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 
00:26:47.204 [2024-05-15 17:17:34.634412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.634565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.634575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.634744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.634849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.634860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.635033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.635206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.635218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.635320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.635550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.635561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.635727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.635885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.635895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.636099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.636270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.636280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.636435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.636602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.636612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 
00:26:47.204 [2024-05-15 17:17:34.636725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.636850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.636860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.637014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.637188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.637198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.637292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.637480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.637490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.637607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.637704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.637714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.637778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.637947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.637957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.638219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.638317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.638328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.638511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.638681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.638691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 
00:26:47.204 [2024-05-15 17:17:34.638877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.639002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.639012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.204 qpair failed and we were unable to recover it. 00:26:47.204 [2024-05-15 17:17:34.639176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.204 [2024-05-15 17:17:34.639334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.639345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.639452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.639623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.639633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.639730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.639824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.639834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.639995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.640121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.640131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.640229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.640395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.640405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.640597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.640705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.640716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 
00:26:47.205 [2024-05-15 17:17:34.640832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.640986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.640996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.641155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.641354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.641365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.641589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.641756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.641766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.642029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.642277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.642288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.642529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.642709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.642719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.642967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.643186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.643196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.643442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.643599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.643609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 
00:26:47.205 [2024-05-15 17:17:34.643765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.643988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.643998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.644100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.644268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.644278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.644503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.644621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.644630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.644835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.644962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.644971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.645083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.645246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.645256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.645433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.645555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.645565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.645674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.645853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.645863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 
00:26:47.205 [2024-05-15 17:17:34.646120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.646194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.646206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.646372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.646563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.646573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.646771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.646933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.646943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.647112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.647293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.647303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.647504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.647625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.647635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.647750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.647936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.647946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.648053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.648222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.648232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 
00:26:47.205 [2024-05-15 17:17:34.648393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.648561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.648570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.205 [2024-05-15 17:17:34.648708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.648870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.205 [2024-05-15 17:17:34.648880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.205 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.649045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.649242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.649253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.649356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.649539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.649550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.649756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.649924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.649933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.650189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.650310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.650319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.650468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.650721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.650730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 
00:26:47.206 [2024-05-15 17:17:34.650927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.651125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.651134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.651322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.651519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.651529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.651765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.651929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.651939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.652109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.652235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.652246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.652370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.652455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.652464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.652558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.652663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.652672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.652841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.652947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.652958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 
00:26:47.206 [2024-05-15 17:17:34.653064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.653169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.653178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.653365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.653467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.653476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.653566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.653689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.653699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.653807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.653906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.653915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.654011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.654112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.654121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.654233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.654345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.654354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.654453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.654559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.654568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 
00:26:47.206 [2024-05-15 17:17:34.654774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.654861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.654870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.655043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.655132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.655141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.655366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.655455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.655466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.655556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.655650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.655660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.206 [2024-05-15 17:17:34.655776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.655868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.206 [2024-05-15 17:17:34.655877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.206 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.655978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.656200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.656210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.656370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.656555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.656565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 
00:26:47.207 [2024-05-15 17:17:34.656661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.656907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.656916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.657140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.657298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.657308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.657482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.657598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.657608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.657774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.657931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.657940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.658043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.658132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.658141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.658271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.658532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.658541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.658763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.658925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.658934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 
00:26:47.207 [2024-05-15 17:17:34.659123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.659329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.659339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.659591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.659767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.659776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.659970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.660237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.660247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.660370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.660558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.660567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.660825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.661076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.661086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.661320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.661539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.661548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.661668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.661771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.661780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 
00:26:47.207 [2024-05-15 17:17:34.662025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.662211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.662221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.662423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.662619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.662629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.662821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.662977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.662987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.663177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.663428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.663437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.663566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.663757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.663767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.664014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.664177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.664187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.664414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.664522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.664531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 
00:26:47.207 [2024-05-15 17:17:34.664688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.664848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.664857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.665106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.665306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.665316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.665531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.665711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.665721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.665895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.666163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.666175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.207 [2024-05-15 17:17:34.666429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.666601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.207 [2024-05-15 17:17:34.666610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.207 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.666772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.667001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.667011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.667253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.667446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.667456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 
00:26:47.208 [2024-05-15 17:17:34.667586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.667775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.667784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.667963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.668195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.668205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.668433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.668698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.668707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:47.208 [2024-05-15 17:17:34.668969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.669082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.669091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:26:47.208 [2024-05-15 17:17:34.669324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.669569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.669579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:47.208 [2024-05-15 17:17:34.669702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.669806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.669816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 
00:26:47.208 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:47.208 [2024-05-15 17:17:34.669995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:47.208 [2024-05-15 17:17:34.670228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.670241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.670400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.670651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.670660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.670931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.671163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.671185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.671343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.671504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.671514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.671703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.671927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.671937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.672168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.672343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.672353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.672596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.672713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.672722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 
00:26:47.208 [2024-05-15 17:17:34.672971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.673203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.673213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.673465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.673646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.673656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.673823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.674001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.674013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.674289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.674419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.674431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.674681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.674902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.674911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.675127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.675319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.675329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.675451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.675573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.675583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 
00:26:47.208 [2024-05-15 17:17:34.675683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.675962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.675972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.676144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.676393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.676405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.676523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.676717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.676727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.676845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.677066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.677076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.208 [2024-05-15 17:17:34.677269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.677517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.208 [2024-05-15 17:17:34.677528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.208 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.677640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.677825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.677835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.678007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.678168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.678180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 
00:26:47.209 [2024-05-15 17:17:34.678380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.678585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.678594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.678763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.678932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.678942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.679112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.679230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.679240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.679379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.679543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.679553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.679676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.679865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.679875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.679975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.680171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.680181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.680369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.680504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.680514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 
00:26:47.209 [2024-05-15 17:17:34.680644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.680868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.680879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.681118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.681285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.681295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.681467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.681716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.681730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.681885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.682122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.682132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.682373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.682561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.682572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.682696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.682871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.682881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.683106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.683267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.683278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 
00:26:47.209 [2024-05-15 17:17:34.683554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.683687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.683696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.683889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.684041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.684051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.684247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.684368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.684378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.684558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.684732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.684742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.685064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.685314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.685324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.685574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.685749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.685759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.685926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.686097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.686107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 
00:26:47.209 [2024-05-15 17:17:34.686336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.686495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.686505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.686627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.686730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.686740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.686940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.687168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.687178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.687364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.687605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.687616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.687872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.688104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.688116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.209 qpair failed and we were unable to recover it. 00:26:47.209 [2024-05-15 17:17:34.688408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.209 [2024-05-15 17:17:34.688603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.688613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.688848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.689025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.689034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 
00:26:47.210 [2024-05-15 17:17:34.689218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.689329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.689339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.689464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.689591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.689602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.689725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.689834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.689845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.690086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.690365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.690375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.690543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.690703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.690712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.690948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.691163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.691176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.691298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.691418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.691427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 
00:26:47.210 [2024-05-15 17:17:34.691600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.691765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.691775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.691992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.692207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.692217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.692460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.692684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.692694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.692819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.693011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.693021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.693197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.693377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.693388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.693490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.693714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.693723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.693918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.694110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.694119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 
00:26:47.210 [2024-05-15 17:17:34.694306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.694418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.694428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.694674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.694925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.694935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.695048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.695271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.695282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.695450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.695578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.695588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.695759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.696019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.696029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.696230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.696478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.696488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.696597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.696712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.696722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 
00:26:47.210 [2024-05-15 17:17:34.696982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.697167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.697178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.697394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.697521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.697531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.697658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.697789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.697799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.697901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.698080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.698090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.698338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.698461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.698471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.210 qpair failed and we were unable to recover it. 00:26:47.210 [2024-05-15 17:17:34.698629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.210 [2024-05-15 17:17:34.698811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.698821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-05-15 17:17:34.698995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.699245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.699255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 
00:26:47.211 [2024-05-15 17:17:34.699386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.699560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.699570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-05-15 17:17:34.699697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.699920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.699930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-05-15 17:17:34.700063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.700346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.700356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-05-15 17:17:34.700533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.700648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.700659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-05-15 17:17:34.700850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.701093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.701104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-05-15 17:17:34.701273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.701452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.701462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-05-15 17:17:34.701639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.701741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.701751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 
00:26:47.211 [2024-05-15 17:17:34.701997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.702173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.702184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-05-15 17:17:34.702350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.702478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.702488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-05-15 17:17:34.702679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.702857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.702867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-05-15 17:17:34.703050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.703229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.703240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-05-15 17:17:34.703415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.703577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.703587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-05-15 17:17:34.703900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.704080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.704090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-05-15 17:17:34.704212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.704378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.704388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 
00:26:47.211 [2024-05-15 17:17:34.704515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.211 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:47.211 [2024-05-15 17:17:34.704643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.211 [2024-05-15 17:17:34.704654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:47.211 qpair failed and we were unable to recover it.
00:26:47.211 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:47.211 [2024-05-15 17:17:34.704924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.211 [2024-05-15 17:17:34.705090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.211 [2024-05-15 17:17:34.705100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:47.211 qpair failed and we were unable to recover it.
00:26:47.211 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.211 [2024-05-15 17:17:34.705322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.211 [2024-05-15 17:17:34.705519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.211 [2024-05-15 17:17:34.705530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:47.211 qpair failed and we were unable to recover it.
00:26:47.211 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:47.211 [2024-05-15 17:17:34.705707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.211 [2024-05-15 17:17:34.705879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.211 [2024-05-15 17:17:34.705889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:47.211 qpair failed and we were unable to recover it.
00:26:47.211 [2024-05-15 17:17:34.706057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.211 [2024-05-15 17:17:34.706250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.211 [2024-05-15 17:17:34.706260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:47.211 qpair failed and we were unable to recover it.
00:26:47.211 [2024-05-15 17:17:34.706438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.211 [2024-05-15 17:17:34.706595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.211 [2024-05-15 17:17:34.706604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:47.211 qpair failed and we were unable to recover it.
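Interleaved with the connection-failure flood above, the test script has started provisioning the target side: host/target_disconnect.sh line 19 runs rpc_cmd bdev_malloc_create 64 512 -b Malloc0, creating a 64 MB RAM-backed bdev with a 512-byte block size, named Malloc0, to back the test namespace. rpc_cmd is the autotest helper that forwards the call to SPDK's scripts/rpc.py against the running target, so the step is roughly equivalent to the sketch below (the socket path is rpc.py's default and an assumption here):

# create a 64 MB malloc bdev with 512-byte blocks, named Malloc0
scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
# on success rpc.py prints the new bdev's name, presumably the bare "Malloc0"
# that shows up a few lines further down in this log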
00:26:47.211 [2024-05-15 17:17:34.706775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.706962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.211 [2024-05-15 17:17:34.706971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.211 qpair failed and we were unable to recover it. 00:26:47.211 [2024-05-15 17:17:34.707095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.707291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.707301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.707525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.707636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.707646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.707815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.707990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.708000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.708273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.708377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.708387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.708579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.708690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.708700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.708971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.709224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.709234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 
00:26:47.212 [2024-05-15 17:17:34.709434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.709657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.709667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.709941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.710199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.710210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.710456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.710627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.710638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.710812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.711063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.711073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.711268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.711444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.711454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.711628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.711734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.711745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.711925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.712044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.712054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 
00:26:47.212 [2024-05-15 17:17:34.712274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.712439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.712449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.712615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.712742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.712752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.712990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.713184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.713194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.713371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.713643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.713654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.713847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.714019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.714029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.714225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.714395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.714405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.714579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.714801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.714812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 
00:26:47.212 [2024-05-15 17:17:34.715058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.715230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.715241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.715412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.715586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.715597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.715824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.715982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.715993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.716192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.716416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.716427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.716705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.716887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.716898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.717150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.717338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.717349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.717597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.717706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.717716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 
00:26:47.212 [2024-05-15 17:17:34.717904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.718106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.718118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.212 qpair failed and we were unable to recover it. 00:26:47.212 [2024-05-15 17:17:34.718318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.212 [2024-05-15 17:17:34.718561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.718573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.718682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.718899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.718911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.719139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.719344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.719355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.719475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.719581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.719592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.719774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.720004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.720015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.720247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.720415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.720425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 
00:26:47.213 [2024-05-15 17:17:34.720599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.720841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.720851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.720965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.721178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.721188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.721412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.721612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.721623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.721803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.721970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.721980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.722140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.722340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.722351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.722595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.722762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.722772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 Malloc0 00:26:47.213 [2024-05-15 17:17:34.723002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.723175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.723186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 
00:26:47.213 [2024-05-15 17:17:34.723453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.213 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:47.213 [2024-05-15 17:17:34.723631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.213 [2024-05-15 17:17:34.723644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:47.213 qpair failed and we were unable to recover it.
00:26:47.213 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:47.213 [2024-05-15 17:17:34.723920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.213 [2024-05-15 17:17:34.724167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.213 [2024-05-15 17:17:34.724177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:47.213 qpair failed and we were unable to recover it.
00:26:47.213 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:47.213 [2024-05-15 17:17:34.724352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.213 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:47.213 [2024-05-15 17:17:34.724575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.213 [2024-05-15 17:17:34.724586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:47.213 qpair failed and we were unable to recover it.
00:26:47.213 [2024-05-15 17:17:34.724702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.213 [2024-05-15 17:17:34.724941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.213 [2024-05-15 17:17:34.724950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:47.213 qpair failed and we were unable to recover it.
00:26:47.213 [2024-05-15 17:17:34.725216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.213 [2024-05-15 17:17:34.725460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.213 [2024-05-15 17:17:34.725470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:47.213 qpair failed and we were unable to recover it.
00:26:47.213 [2024-05-15 17:17:34.725635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.213 [2024-05-15 17:17:34.725923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:47.213 [2024-05-15 17:17:34.725933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420
00:26:47.213 qpair failed and we were unable to recover it.
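The next scripted step, host/target_disconnect.sh line 21, is rpc_cmd nvmf_create_transport -t tcp -o, which initializes the TCP transport inside the NVMe-oF target; the "*** TCP Transport Init ***" notice from tcp.c a little further down is the target acknowledging it. A minimal sketch of the same step (only the transport type is required; the extra -o flag from the log is omitted here since its exact meaning depends on the rpc.py version in use):

# initialize the TCP transport in the running nvmf target with default parameters
scripts/rpc.py nvmf_create_transport -t tcp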
00:26:47.213 [2024-05-15 17:17:34.726092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.726259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.726270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.726507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.726765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.726775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.726957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.727120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.727129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.727238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.727421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.727431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.727680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.728005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.728014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.728254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.728501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.728510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.728677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.728933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.728943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 
00:26:47.213 [2024-05-15 17:17:34.729206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.729397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.729407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.213 qpair failed and we were unable to recover it. 00:26:47.213 [2024-05-15 17:17:34.729580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.213 [2024-05-15 17:17:34.729751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.729761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.729916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.730073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.730082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.730258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.730410] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.214 [2024-05-15 17:17:34.730424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.730434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.730592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.730703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.730713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.730962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.731243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.731253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.731427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.731625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.731638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 
00:26:47.214 [2024-05-15 17:17:34.731896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.732082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.732091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.732328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.732437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.732447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.732615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.732791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.732801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.733035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.733215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.733225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.733489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.733659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.733669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.733785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.734052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.734062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.734270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.734431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.734440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 
00:26:47.214 [2024-05-15 17:17:34.734712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.734872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.734881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.735139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.735393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.735403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.735572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.735739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.735751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.736015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.736262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.736272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.736393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.736614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.736624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.736910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.737157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.737170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.737388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.737555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.737565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 
00:26:47.214 [2024-05-15 17:17:34.737733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.737981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.737991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.738235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.738403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.738413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.738641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.738888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.738898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.214 [2024-05-15 17:17:34.739138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.739304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.739314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:47.214 [2024-05-15 17:17:34.739573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.214 [2024-05-15 17:17:34.739846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.739860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:47.214 [2024-05-15 17:17:34.740090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.740246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.740257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 
00:26:47.214 [2024-05-15 17:17:34.740523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.740751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.740760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.214 qpair failed and we were unable to recover it. 00:26:47.214 [2024-05-15 17:17:34.741020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.214 [2024-05-15 17:17:34.741215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.741225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.741405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.741561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.741571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.741671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.741838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.741848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.741949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.742196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.742206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.742434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.742631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.742641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.742887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.743144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.743153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 
00:26:47.215 [2024-05-15 17:17:34.743401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.743627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.743637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.743934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.744174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.744186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.744420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.744644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.744654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.744904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.745127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.745136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.745302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.745403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.745412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.745580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.745757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.745766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.745926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.746145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.746155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 
00:26:47.215 [2024-05-15 17:17:34.746313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.746559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.746569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.746789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.746998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.747008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.215 [2024-05-15 17:17:34.747203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.747448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.747458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:47.215 [2024-05-15 17:17:34.747620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.215 [2024-05-15 17:17:34.747880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.747892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:47.215 [2024-05-15 17:17:34.748089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.748278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.748288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.748461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.748690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.748699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 
00:26:47.215 [2024-05-15 17:17:34.748934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.749159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.749172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.749395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.749553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.749562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.749812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.750051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.750061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.750301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.750477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.750487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.750654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.750759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.750769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.751012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.751262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.751272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.215 qpair failed and we were unable to recover it. 00:26:47.215 [2024-05-15 17:17:34.751533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.215 [2024-05-15 17:17:34.751704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.751714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 
00:26:47.216 [2024-05-15 17:17:34.751962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.752190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.752200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 [2024-05-15 17:17:34.752423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.752644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.752654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 [2024-05-15 17:17:34.752845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.753006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.753016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 [2024-05-15 17:17:34.753189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.753370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.753380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 [2024-05-15 17:17:34.753497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.753722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.753732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 [2024-05-15 17:17:34.753927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.754103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.754112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 [2024-05-15 17:17:34.754374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.754597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.754607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 
00:26:47.216 [2024-05-15 17:17:34.754841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.755090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.755100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.216 [2024-05-15 17:17:34.755288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.755509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.755519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:47.216 [2024-05-15 17:17:34.755770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.216 [2024-05-15 17:17:34.756043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.756054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:47.216 [2024-05-15 17:17:34.756288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.756456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.756465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 [2024-05-15 17:17:34.756634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.756855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.756865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 [2024-05-15 17:17:34.757052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.757137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.757147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 
00:26:47.216 [2024-05-15 17:17:34.757403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.757602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.757612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 [2024-05-15 17:17:34.757791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.757951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.757960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 [2024-05-15 17:17:34.758136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.758330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.758340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f93f8000b90 with addr=10.0.0.2, port=4420 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 [2024-05-15 17:17:34.758447] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:47.216 [2024-05-15 17:17:34.758559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.216 [2024-05-15 17:17:34.758682] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:47.216 [2024-05-15 17:17:34.760766] posix.c: 675:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:26:47.216 [2024-05-15 17:17:34.760809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f93f8000b90 (107): Transport endpoint is not connected 00:26:47.216 [2024-05-15 17:17:34.760851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.216 qpair failed and we were unable to recover it. 
00:26:47.216 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.216 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:47.216 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.216 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:47.216 [2024-05-15 17:17:34.770989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.216 [2024-05-15 17:17:34.771078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.216 [2024-05-15 17:17:34.771096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.216 [2024-05-15 17:17:34.771105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.216 [2024-05-15 17:17:34.771112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.216 [2024-05-15 17:17:34.771129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.216 qpair failed and we were unable to recover it. 00:26:47.216 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.216 17:17:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3222132 00:26:47.216 [2024-05-15 17:17:34.780938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.216 [2024-05-15 17:17:34.781004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.216 [2024-05-15 17:17:34.781020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.216 [2024-05-15 17:17:34.781027] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.216 [2024-05-15 17:17:34.781033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.217 [2024-05-15 17:17:34.781048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.217 qpair failed and we were unable to recover it. 
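For reference, the target-side configuration traced in the rpc_cmd lines above (create subsystem nqn.2016-06.io.spdk:cnode1, add namespace Malloc0, add TCP data and discovery listeners on 10.0.0.2:4420) could be reproduced outside the test harness roughly as follows. This is a sketch, not part of the captured run: it assumes a running SPDK nvmf_tgt reachable through scripts/rpc.py on its default RPC socket, and the transport and malloc-bdev creation steps (with illustrative 64 MiB / 512 B sizing) are inferred rather than shown in this excerpt.

    # Hypothetical standalone equivalent of the setup traced above (assumptions noted per line).
    ./scripts/rpc.py nvmf_create_transport -t tcp                                   # enable the TCP transport (step not shown in this excerpt)
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512                           # backing bdev; size and block size are illustrative
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, set serial number
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420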
00:26:47.217 [2024-05-15 17:17:34.790910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.217 [2024-05-15 17:17:34.790979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.217 [2024-05-15 17:17:34.790995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.217 [2024-05-15 17:17:34.791002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.217 [2024-05-15 17:17:34.791008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.217 [2024-05-15 17:17:34.791023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.217 qpair failed and we were unable to recover it. 00:26:47.217 [2024-05-15 17:17:34.801009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.217 [2024-05-15 17:17:34.801078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.217 [2024-05-15 17:17:34.801095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.217 [2024-05-15 17:17:34.801102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.217 [2024-05-15 17:17:34.801108] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.217 [2024-05-15 17:17:34.801122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.217 qpair failed and we were unable to recover it. 00:26:47.217 [2024-05-15 17:17:34.810966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.217 [2024-05-15 17:17:34.811025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.217 [2024-05-15 17:17:34.811041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.217 [2024-05-15 17:17:34.811048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.217 [2024-05-15 17:17:34.811053] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.217 [2024-05-15 17:17:34.811068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.217 qpair failed and we were unable to recover it. 
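Each failed attempt from here on reports the same completion status from the target. As a quick decode (an interpretation added for readability, not part of the captured output): SCT 1 is the command-specific status type, and SC 130 (0x82) is the Fabrics CONNECT "invalid parameters" code, which lines up with the target-side "Unknown controller ID 0x1" entries: when these I/O-queue CONNECTs arrive, the target no longer has a controller with ID 1, so each one is rejected.

    # sanity-check the hex value of the reported status code
    printf 'sct=%d sc=%d (0x%02x)\n' 1 130 130    # -> sct=1 sc=130 (0x82)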
00:26:47.217 [2024-05-15 17:17:34.820985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.217 [2024-05-15 17:17:34.821042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.217 [2024-05-15 17:17:34.821058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.217 [2024-05-15 17:17:34.821064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.217 [2024-05-15 17:17:34.821070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.217 [2024-05-15 17:17:34.821085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.217 qpair failed and we were unable to recover it. 00:26:47.217 [2024-05-15 17:17:34.830999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.217 [2024-05-15 17:17:34.831078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.217 [2024-05-15 17:17:34.831093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.217 [2024-05-15 17:17:34.831100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.217 [2024-05-15 17:17:34.831106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.217 [2024-05-15 17:17:34.831120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.217 qpair failed and we were unable to recover it. 00:26:47.476 [2024-05-15 17:17:34.841051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.476 [2024-05-15 17:17:34.841116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.476 [2024-05-15 17:17:34.841131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.476 [2024-05-15 17:17:34.841138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.476 [2024-05-15 17:17:34.841144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.476 [2024-05-15 17:17:34.841159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.476 qpair failed and we were unable to recover it. 
00:26:47.476 [2024-05-15 17:17:34.851116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.476 [2024-05-15 17:17:34.851176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.476 [2024-05-15 17:17:34.851194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.476 [2024-05-15 17:17:34.851201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.476 [2024-05-15 17:17:34.851207] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.476 [2024-05-15 17:17:34.851222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.476 qpair failed and we were unable to recover it. 00:26:47.476 [2024-05-15 17:17:34.861077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.476 [2024-05-15 17:17:34.861137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.476 [2024-05-15 17:17:34.861152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.476 [2024-05-15 17:17:34.861159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.476 [2024-05-15 17:17:34.861170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.476 [2024-05-15 17:17:34.861185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.476 qpair failed and we were unable to recover it. 00:26:47.476 [2024-05-15 17:17:34.871114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.476 [2024-05-15 17:17:34.871210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.476 [2024-05-15 17:17:34.871225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.476 [2024-05-15 17:17:34.871231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.477 [2024-05-15 17:17:34.871237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.477 [2024-05-15 17:17:34.871251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.477 qpair failed and we were unable to recover it. 
00:26:47.477 [2024-05-15 17:17:34.881162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.477 [2024-05-15 17:17:34.881229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.477 [2024-05-15 17:17:34.881244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.477 [2024-05-15 17:17:34.881251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.477 [2024-05-15 17:17:34.881257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.477 [2024-05-15 17:17:34.881271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.477 qpair failed and we were unable to recover it. 00:26:47.477 [2024-05-15 17:17:34.891207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.477 [2024-05-15 17:17:34.891271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.477 [2024-05-15 17:17:34.891287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.477 [2024-05-15 17:17:34.891294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.477 [2024-05-15 17:17:34.891300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.477 [2024-05-15 17:17:34.891315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.477 qpair failed and we were unable to recover it. 00:26:47.477 [2024-05-15 17:17:34.901226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.477 [2024-05-15 17:17:34.901286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.477 [2024-05-15 17:17:34.901303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.477 [2024-05-15 17:17:34.901309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.477 [2024-05-15 17:17:34.901315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.477 [2024-05-15 17:17:34.901330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.477 qpair failed and we were unable to recover it. 
00:26:47.477 [2024-05-15 17:17:34.911219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.477 [2024-05-15 17:17:34.911283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.477 [2024-05-15 17:17:34.911299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.477 [2024-05-15 17:17:34.911306] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.477 [2024-05-15 17:17:34.911312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.477 [2024-05-15 17:17:34.911326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.477 qpair failed and we were unable to recover it. 00:26:47.477 [2024-05-15 17:17:34.921268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.477 [2024-05-15 17:17:34.921332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.477 [2024-05-15 17:17:34.921348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.477 [2024-05-15 17:17:34.921355] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.477 [2024-05-15 17:17:34.921361] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.477 [2024-05-15 17:17:34.921375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.477 qpair failed and we were unable to recover it. 00:26:47.477 [2024-05-15 17:17:34.931287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.477 [2024-05-15 17:17:34.931345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.477 [2024-05-15 17:17:34.931360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.477 [2024-05-15 17:17:34.931366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.477 [2024-05-15 17:17:34.931372] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.477 [2024-05-15 17:17:34.931386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.477 qpair failed and we were unable to recover it. 
00:26:47.477 [2024-05-15 17:17:34.941315] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.477 [2024-05-15 17:17:34.941374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.477 [2024-05-15 17:17:34.941394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.477 [2024-05-15 17:17:34.941401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.477 [2024-05-15 17:17:34.941407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.477 [2024-05-15 17:17:34.941421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.477 qpair failed and we were unable to recover it. 00:26:47.477 [2024-05-15 17:17:34.951327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.477 [2024-05-15 17:17:34.951390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.477 [2024-05-15 17:17:34.951405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.477 [2024-05-15 17:17:34.951411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.477 [2024-05-15 17:17:34.951417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.477 [2024-05-15 17:17:34.951432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.477 qpair failed and we were unable to recover it. 00:26:47.477 [2024-05-15 17:17:34.961390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.477 [2024-05-15 17:17:34.961452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.477 [2024-05-15 17:17:34.961467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.477 [2024-05-15 17:17:34.961473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.477 [2024-05-15 17:17:34.961479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.477 [2024-05-15 17:17:34.961493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.477 qpair failed and we were unable to recover it. 
00:26:47.477 [2024-05-15 17:17:34.971410] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.477 [2024-05-15 17:17:34.971513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.477 [2024-05-15 17:17:34.971527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.477 [2024-05-15 17:17:34.971534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.477 [2024-05-15 17:17:34.971540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.477 [2024-05-15 17:17:34.971554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.477 qpair failed and we were unable to recover it. 00:26:47.477 [2024-05-15 17:17:34.981456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.477 [2024-05-15 17:17:34.981528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.477 [2024-05-15 17:17:34.981543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.477 [2024-05-15 17:17:34.981550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.477 [2024-05-15 17:17:34.981556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.477 [2024-05-15 17:17:34.981573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.477 qpair failed and we were unable to recover it. 00:26:47.477 [2024-05-15 17:17:34.991470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.477 [2024-05-15 17:17:34.991536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.477 [2024-05-15 17:17:34.991551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.477 [2024-05-15 17:17:34.991558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.477 [2024-05-15 17:17:34.991565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.477 [2024-05-15 17:17:34.991579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.477 qpair failed and we were unable to recover it. 
00:26:47.477 [2024-05-15 17:17:35.001549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.477 [2024-05-15 17:17:35.001638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.477 [2024-05-15 17:17:35.001653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.477 [2024-05-15 17:17:35.001660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.477 [2024-05-15 17:17:35.001666] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.478 [2024-05-15 17:17:35.001680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.478 qpair failed and we were unable to recover it. 00:26:47.478 [2024-05-15 17:17:35.011548] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.478 [2024-05-15 17:17:35.011610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.478 [2024-05-15 17:17:35.011625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.478 [2024-05-15 17:17:35.011632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.478 [2024-05-15 17:17:35.011638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.478 [2024-05-15 17:17:35.011653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.478 qpair failed and we were unable to recover it. 00:26:47.478 [2024-05-15 17:17:35.021559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.478 [2024-05-15 17:17:35.021621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.478 [2024-05-15 17:17:35.021636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.478 [2024-05-15 17:17:35.021643] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.478 [2024-05-15 17:17:35.021649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.478 [2024-05-15 17:17:35.021664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.478 qpair failed and we were unable to recover it. 
00:26:47.478 [2024-05-15 17:17:35.031541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.478 [2024-05-15 17:17:35.031600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.478 [2024-05-15 17:17:35.031619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.478 [2024-05-15 17:17:35.031625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.478 [2024-05-15 17:17:35.031631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.478 [2024-05-15 17:17:35.031645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.478 qpair failed and we were unable to recover it. 00:26:47.478 [2024-05-15 17:17:35.041556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.478 [2024-05-15 17:17:35.041624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.478 [2024-05-15 17:17:35.041639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.478 [2024-05-15 17:17:35.041646] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.478 [2024-05-15 17:17:35.041651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.478 [2024-05-15 17:17:35.041665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.478 qpair failed and we were unable to recover it. 00:26:47.478 [2024-05-15 17:17:35.051660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.478 [2024-05-15 17:17:35.051726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.478 [2024-05-15 17:17:35.051741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.478 [2024-05-15 17:17:35.051747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.478 [2024-05-15 17:17:35.051753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.478 [2024-05-15 17:17:35.051767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.478 qpair failed and we were unable to recover it. 
00:26:47.478 [2024-05-15 17:17:35.061616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.478 [2024-05-15 17:17:35.061677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.478 [2024-05-15 17:17:35.061693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.478 [2024-05-15 17:17:35.061699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.478 [2024-05-15 17:17:35.061705] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.478 [2024-05-15 17:17:35.061719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.478 qpair failed and we were unable to recover it. 00:26:47.478 [2024-05-15 17:17:35.071652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.478 [2024-05-15 17:17:35.071761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.478 [2024-05-15 17:17:35.071775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.478 [2024-05-15 17:17:35.071782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.478 [2024-05-15 17:17:35.071791] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.478 [2024-05-15 17:17:35.071806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.478 qpair failed and we were unable to recover it. 00:26:47.478 [2024-05-15 17:17:35.081711] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.478 [2024-05-15 17:17:35.081777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.478 [2024-05-15 17:17:35.081791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.478 [2024-05-15 17:17:35.081798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.478 [2024-05-15 17:17:35.081804] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.478 [2024-05-15 17:17:35.081818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.478 qpair failed and we were unable to recover it. 
00:26:47.478 [2024-05-15 17:17:35.091748] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.478 [2024-05-15 17:17:35.091812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.478 [2024-05-15 17:17:35.091826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.478 [2024-05-15 17:17:35.091833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.478 [2024-05-15 17:17:35.091839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.478 [2024-05-15 17:17:35.091853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.478 qpair failed and we were unable to recover it. 00:26:47.478 [2024-05-15 17:17:35.101764] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.478 [2024-05-15 17:17:35.101822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.478 [2024-05-15 17:17:35.101837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.478 [2024-05-15 17:17:35.101844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.478 [2024-05-15 17:17:35.101850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.478 [2024-05-15 17:17:35.101864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.478 qpair failed and we were unable to recover it. 00:26:47.478 [2024-05-15 17:17:35.111790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.478 [2024-05-15 17:17:35.111851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.478 [2024-05-15 17:17:35.111865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.478 [2024-05-15 17:17:35.111872] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.478 [2024-05-15 17:17:35.111877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.478 [2024-05-15 17:17:35.111891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.478 qpair failed and we were unable to recover it. 
00:26:47.478 [2024-05-15 17:17:35.121810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.478 [2024-05-15 17:17:35.121882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.478 [2024-05-15 17:17:35.121896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.478 [2024-05-15 17:17:35.121903] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.478 [2024-05-15 17:17:35.121909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.478 [2024-05-15 17:17:35.121923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.478 qpair failed and we were unable to recover it. 00:26:47.478 [2024-05-15 17:17:35.131827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.478 [2024-05-15 17:17:35.131891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.478 [2024-05-15 17:17:35.131906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.478 [2024-05-15 17:17:35.131912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.478 [2024-05-15 17:17:35.131918] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.478 [2024-05-15 17:17:35.131932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.478 qpair failed and we were unable to recover it. 00:26:47.738 [2024-05-15 17:17:35.141869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.738 [2024-05-15 17:17:35.141944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.738 [2024-05-15 17:17:35.141958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.738 [2024-05-15 17:17:35.141966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.738 [2024-05-15 17:17:35.141972] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.738 [2024-05-15 17:17:35.141986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.738 qpair failed and we were unable to recover it. 
00:26:47.738 [2024-05-15 17:17:35.151879] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.738 [2024-05-15 17:17:35.151948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.738 [2024-05-15 17:17:35.151963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.738 [2024-05-15 17:17:35.151971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.738 [2024-05-15 17:17:35.151976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.738 [2024-05-15 17:17:35.151991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.738 qpair failed and we were unable to recover it. 00:26:47.738 [2024-05-15 17:17:35.161927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.738 [2024-05-15 17:17:35.161988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.738 [2024-05-15 17:17:35.162003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.738 [2024-05-15 17:17:35.162014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.738 [2024-05-15 17:17:35.162020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.738 [2024-05-15 17:17:35.162034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.738 qpair failed and we were unable to recover it. 00:26:47.738 [2024-05-15 17:17:35.171923] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.738 [2024-05-15 17:17:35.171983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.738 [2024-05-15 17:17:35.171997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.738 [2024-05-15 17:17:35.172004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.738 [2024-05-15 17:17:35.172010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.738 [2024-05-15 17:17:35.172024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.738 qpair failed and we were unable to recover it. 
00:26:47.738 [2024-05-15 17:17:35.181988] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.738 [2024-05-15 17:17:35.182055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.738 [2024-05-15 17:17:35.182069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.738 [2024-05-15 17:17:35.182077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.738 [2024-05-15 17:17:35.182082] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.738 [2024-05-15 17:17:35.182097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.738 qpair failed and we were unable to recover it. 00:26:47.738 [2024-05-15 17:17:35.192000] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.738 [2024-05-15 17:17:35.192064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.738 [2024-05-15 17:17:35.192078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.738 [2024-05-15 17:17:35.192085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.738 [2024-05-15 17:17:35.192090] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.738 [2024-05-15 17:17:35.192104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.738 qpair failed and we were unable to recover it. 00:26:47.738 [2024-05-15 17:17:35.202037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.738 [2024-05-15 17:17:35.202099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.738 [2024-05-15 17:17:35.202114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.738 [2024-05-15 17:17:35.202120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.738 [2024-05-15 17:17:35.202126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.738 [2024-05-15 17:17:35.202140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.738 qpair failed and we were unable to recover it. 
00:26:47.738 [2024-05-15 17:17:35.212056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.738 [2024-05-15 17:17:35.212115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.738 [2024-05-15 17:17:35.212129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.738 [2024-05-15 17:17:35.212136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.738 [2024-05-15 17:17:35.212141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.738 [2024-05-15 17:17:35.212155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.738 qpair failed and we were unable to recover it. 00:26:47.738 [2024-05-15 17:17:35.222083] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.739 [2024-05-15 17:17:35.222149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.739 [2024-05-15 17:17:35.222163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.739 [2024-05-15 17:17:35.222174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.739 [2024-05-15 17:17:35.222179] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.739 [2024-05-15 17:17:35.222193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.739 qpair failed and we were unable to recover it. 00:26:47.739 [2024-05-15 17:17:35.232117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.739 [2024-05-15 17:17:35.232188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.739 [2024-05-15 17:17:35.232202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.739 [2024-05-15 17:17:35.232209] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.739 [2024-05-15 17:17:35.232215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.739 [2024-05-15 17:17:35.232229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.739 qpair failed and we were unable to recover it. 
00:26:47.739 [2024-05-15 17:17:35.242158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.739 [2024-05-15 17:17:35.242224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.739 [2024-05-15 17:17:35.242239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.739 [2024-05-15 17:17:35.242245] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.739 [2024-05-15 17:17:35.242251] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.739 [2024-05-15 17:17:35.242265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.739 qpair failed and we were unable to recover it. 00:26:47.739 [2024-05-15 17:17:35.252139] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.739 [2024-05-15 17:17:35.252205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.739 [2024-05-15 17:17:35.252220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.739 [2024-05-15 17:17:35.252230] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.739 [2024-05-15 17:17:35.252236] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.739 [2024-05-15 17:17:35.252250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.739 qpair failed and we were unable to recover it. 00:26:47.739 [2024-05-15 17:17:35.262237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.739 [2024-05-15 17:17:35.262312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.739 [2024-05-15 17:17:35.262326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.739 [2024-05-15 17:17:35.262333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.739 [2024-05-15 17:17:35.262339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.739 [2024-05-15 17:17:35.262353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.739 qpair failed and we were unable to recover it. 
00:26:47.739 [2024-05-15 17:17:35.272230] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.739 [2024-05-15 17:17:35.272291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.739 [2024-05-15 17:17:35.272306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.739 [2024-05-15 17:17:35.272313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.739 [2024-05-15 17:17:35.272318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.739 [2024-05-15 17:17:35.272332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.739 qpair failed and we were unable to recover it. 00:26:47.739 [2024-05-15 17:17:35.282265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.739 [2024-05-15 17:17:35.282323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.739 [2024-05-15 17:17:35.282337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.739 [2024-05-15 17:17:35.282345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.739 [2024-05-15 17:17:35.282351] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.739 [2024-05-15 17:17:35.282365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.739 qpair failed and we were unable to recover it. 00:26:47.739 [2024-05-15 17:17:35.292309] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.739 [2024-05-15 17:17:35.292382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.739 [2024-05-15 17:17:35.292396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.739 [2024-05-15 17:17:35.292403] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.739 [2024-05-15 17:17:35.292409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.739 [2024-05-15 17:17:35.292423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.739 qpair failed and we were unable to recover it. 
00:26:47.739 [2024-05-15 17:17:35.302377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.739 [2024-05-15 17:17:35.302440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.739 [2024-05-15 17:17:35.302454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.739 [2024-05-15 17:17:35.302461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.739 [2024-05-15 17:17:35.302467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.739 [2024-05-15 17:17:35.302480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.739 qpair failed and we were unable to recover it. 00:26:47.739 [2024-05-15 17:17:35.312353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.739 [2024-05-15 17:17:35.312415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.739 [2024-05-15 17:17:35.312430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.739 [2024-05-15 17:17:35.312437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.739 [2024-05-15 17:17:35.312443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.739 [2024-05-15 17:17:35.312457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.739 qpair failed and we were unable to recover it. 00:26:47.739 [2024-05-15 17:17:35.322387] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.739 [2024-05-15 17:17:35.322455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.739 [2024-05-15 17:17:35.322470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.739 [2024-05-15 17:17:35.322477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.739 [2024-05-15 17:17:35.322483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.739 [2024-05-15 17:17:35.322496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.739 qpair failed and we were unable to recover it. 
00:26:47.739 [2024-05-15 17:17:35.332384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.739 [2024-05-15 17:17:35.332447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.739 [2024-05-15 17:17:35.332462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.739 [2024-05-15 17:17:35.332469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.739 [2024-05-15 17:17:35.332475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.739 [2024-05-15 17:17:35.332489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.739 qpair failed and we were unable to recover it. 00:26:47.739 [2024-05-15 17:17:35.342453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.739 [2024-05-15 17:17:35.342513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.739 [2024-05-15 17:17:35.342531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.739 [2024-05-15 17:17:35.342538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.739 [2024-05-15 17:17:35.342544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.739 [2024-05-15 17:17:35.342558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.739 qpair failed and we were unable to recover it. 00:26:47.739 [2024-05-15 17:17:35.352457] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.739 [2024-05-15 17:17:35.352525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.740 [2024-05-15 17:17:35.352542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.740 [2024-05-15 17:17:35.352550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.740 [2024-05-15 17:17:35.352555] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.740 [2024-05-15 17:17:35.352569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.740 qpair failed and we were unable to recover it. 
00:26:47.740 [2024-05-15 17:17:35.362470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.740 [2024-05-15 17:17:35.362567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.740 [2024-05-15 17:17:35.362581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.740 [2024-05-15 17:17:35.362588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.740 [2024-05-15 17:17:35.362593] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.740 [2024-05-15 17:17:35.362609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.740 qpair failed and we were unable to recover it. 00:26:47.740 [2024-05-15 17:17:35.372510] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.740 [2024-05-15 17:17:35.372574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.740 [2024-05-15 17:17:35.372588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.740 [2024-05-15 17:17:35.372595] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.740 [2024-05-15 17:17:35.372601] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.740 [2024-05-15 17:17:35.372615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.740 qpair failed and we were unable to recover it. 00:26:47.740 [2024-05-15 17:17:35.382557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.740 [2024-05-15 17:17:35.382624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.740 [2024-05-15 17:17:35.382638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.740 [2024-05-15 17:17:35.382645] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.740 [2024-05-15 17:17:35.382651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.740 [2024-05-15 17:17:35.382668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.740 qpair failed and we were unable to recover it. 
00:26:47.740 [2024-05-15 17:17:35.392569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.740 [2024-05-15 17:17:35.392631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.740 [2024-05-15 17:17:35.392645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.740 [2024-05-15 17:17:35.392652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.740 [2024-05-15 17:17:35.392658] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.740 [2024-05-15 17:17:35.392672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.740 qpair failed and we were unable to recover it. 00:26:47.999 [2024-05-15 17:17:35.402629] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.999 [2024-05-15 17:17:35.402697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.999 [2024-05-15 17:17:35.402712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.999 [2024-05-15 17:17:35.402720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.999 [2024-05-15 17:17:35.402726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.999 [2024-05-15 17:17:35.402740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.999 qpair failed and we were unable to recover it. 00:26:47.999 [2024-05-15 17:17:35.412584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.999 [2024-05-15 17:17:35.412679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.999 [2024-05-15 17:17:35.412694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.999 [2024-05-15 17:17:35.412700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.999 [2024-05-15 17:17:35.412706] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.999 [2024-05-15 17:17:35.412720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.999 qpair failed and we were unable to recover it. 
00:26:47.999 [2024-05-15 17:17:35.422666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.999 [2024-05-15 17:17:35.422726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.999 [2024-05-15 17:17:35.422740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.999 [2024-05-15 17:17:35.422747] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.999 [2024-05-15 17:17:35.422753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.999 [2024-05-15 17:17:35.422767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.999 qpair failed and we were unable to recover it. 00:26:47.999 [2024-05-15 17:17:35.432649] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.999 [2024-05-15 17:17:35.432713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.999 [2024-05-15 17:17:35.432731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.999 [2024-05-15 17:17:35.432738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.999 [2024-05-15 17:17:35.432744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.999 [2024-05-15 17:17:35.432758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.999 qpair failed and we were unable to recover it. 00:26:47.999 [2024-05-15 17:17:35.442650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.999 [2024-05-15 17:17:35.442748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.999 [2024-05-15 17:17:35.442762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.999 [2024-05-15 17:17:35.442769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.999 [2024-05-15 17:17:35.442774] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.999 [2024-05-15 17:17:35.442789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.999 qpair failed and we were unable to recover it. 
00:26:47.999 [2024-05-15 17:17:35.452757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.999 [2024-05-15 17:17:35.452815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.999 [2024-05-15 17:17:35.452830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.999 [2024-05-15 17:17:35.452836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.999 [2024-05-15 17:17:35.452842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.999 [2024-05-15 17:17:35.452857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.999 qpair failed and we were unable to recover it. 00:26:47.999 [2024-05-15 17:17:35.462763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.999 [2024-05-15 17:17:35.462862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.999 [2024-05-15 17:17:35.462876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.999 [2024-05-15 17:17:35.462883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.999 [2024-05-15 17:17:35.462889] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.999 [2024-05-15 17:17:35.462904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.999 qpair failed and we were unable to recover it. 00:26:47.999 [2024-05-15 17:17:35.472820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.999 [2024-05-15 17:17:35.472880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.999 [2024-05-15 17:17:35.472894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.999 [2024-05-15 17:17:35.472901] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.999 [2024-05-15 17:17:35.472910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.999 [2024-05-15 17:17:35.472925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.999 qpair failed and we were unable to recover it. 
00:26:47.999 [2024-05-15 17:17:35.482834] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.999 [2024-05-15 17:17:35.482893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.999 [2024-05-15 17:17:35.482908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.999 [2024-05-15 17:17:35.482915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.999 [2024-05-15 17:17:35.482921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.999 [2024-05-15 17:17:35.482935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.999 qpair failed and we were unable to recover it. 00:26:47.999 [2024-05-15 17:17:35.492858] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.999 [2024-05-15 17:17:35.492922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.999 [2024-05-15 17:17:35.492936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.999 [2024-05-15 17:17:35.492943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.999 [2024-05-15 17:17:35.492949] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:47.999 [2024-05-15 17:17:35.492962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:47.999 qpair failed and we were unable to recover it. 00:26:47.999 [2024-05-15 17:17:35.502865] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:47.999 [2024-05-15 17:17:35.502926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:47.999 [2024-05-15 17:17:35.502940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:47.999 [2024-05-15 17:17:35.502947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:47.999 [2024-05-15 17:17:35.502953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.000 [2024-05-15 17:17:35.502967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.000 qpair failed and we were unable to recover it. 
00:26:48.000 [2024-05-15 17:17:35.512983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.000 [2024-05-15 17:17:35.513065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.000 [2024-05-15 17:17:35.513079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.000 [2024-05-15 17:17:35.513086] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.000 [2024-05-15 17:17:35.513092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.000 [2024-05-15 17:17:35.513106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.000 qpair failed and we were unable to recover it. 00:26:48.000 [2024-05-15 17:17:35.522946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.000 [2024-05-15 17:17:35.523013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.000 [2024-05-15 17:17:35.523028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.000 [2024-05-15 17:17:35.523035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.000 [2024-05-15 17:17:35.523041] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.000 [2024-05-15 17:17:35.523055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.000 qpair failed and we were unable to recover it. 00:26:48.000 [2024-05-15 17:17:35.532985] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.000 [2024-05-15 17:17:35.533047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.000 [2024-05-15 17:17:35.533061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.000 [2024-05-15 17:17:35.533068] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.000 [2024-05-15 17:17:35.533074] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.000 [2024-05-15 17:17:35.533088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.000 qpair failed and we were unable to recover it. 
00:26:48.000 [2024-05-15 17:17:35.542966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.000 [2024-05-15 17:17:35.543022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.000 [2024-05-15 17:17:35.543036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.000 [2024-05-15 17:17:35.543043] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.000 [2024-05-15 17:17:35.543048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.000 [2024-05-15 17:17:35.543063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.000 qpair failed and we were unable to recover it. 00:26:48.000 [2024-05-15 17:17:35.553073] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.000 [2024-05-15 17:17:35.553139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.000 [2024-05-15 17:17:35.553153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.000 [2024-05-15 17:17:35.553160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.000 [2024-05-15 17:17:35.553172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.000 [2024-05-15 17:17:35.553186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.000 qpair failed and we were unable to recover it. 00:26:48.000 [2024-05-15 17:17:35.563077] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.000 [2024-05-15 17:17:35.563142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.000 [2024-05-15 17:17:35.563157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.000 [2024-05-15 17:17:35.563167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.000 [2024-05-15 17:17:35.563177] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.000 [2024-05-15 17:17:35.563191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.000 qpair failed and we were unable to recover it. 
00:26:48.000 [2024-05-15 17:17:35.573107] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.000 [2024-05-15 17:17:35.573176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.000 [2024-05-15 17:17:35.573192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.000 [2024-05-15 17:17:35.573199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.000 [2024-05-15 17:17:35.573205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.000 [2024-05-15 17:17:35.573220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.000 qpair failed and we were unable to recover it. 00:26:48.000 [2024-05-15 17:17:35.583082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.000 [2024-05-15 17:17:35.583147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.000 [2024-05-15 17:17:35.583162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.000 [2024-05-15 17:17:35.583176] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.000 [2024-05-15 17:17:35.583182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.000 [2024-05-15 17:17:35.583197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.000 qpair failed and we were unable to recover it. 00:26:48.000 [2024-05-15 17:17:35.593206] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.000 [2024-05-15 17:17:35.593274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.000 [2024-05-15 17:17:35.593288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.000 [2024-05-15 17:17:35.593295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.000 [2024-05-15 17:17:35.593301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.000 [2024-05-15 17:17:35.593315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.000 qpair failed and we were unable to recover it. 
00:26:48.000 [2024-05-15 17:17:35.603224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.000 [2024-05-15 17:17:35.603287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.000 [2024-05-15 17:17:35.603301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.000 [2024-05-15 17:17:35.603308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.000 [2024-05-15 17:17:35.603314] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.000 [2024-05-15 17:17:35.603329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.000 qpair failed and we were unable to recover it. 00:26:48.000 [2024-05-15 17:17:35.613148] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.000 [2024-05-15 17:17:35.613211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.000 [2024-05-15 17:17:35.613226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.000 [2024-05-15 17:17:35.613233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.000 [2024-05-15 17:17:35.613238] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.000 [2024-05-15 17:17:35.613253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.000 qpair failed and we were unable to recover it. 00:26:48.000 [2024-05-15 17:17:35.623203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.000 [2024-05-15 17:17:35.623263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.000 [2024-05-15 17:17:35.623278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.000 [2024-05-15 17:17:35.623284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.000 [2024-05-15 17:17:35.623290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.000 [2024-05-15 17:17:35.623304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.000 qpair failed and we were unable to recover it. 
00:26:48.000 [2024-05-15 17:17:35.633233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.000 [2024-05-15 17:17:35.633298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.000 [2024-05-15 17:17:35.633312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.000 [2024-05-15 17:17:35.633319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.000 [2024-05-15 17:17:35.633325] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.000 [2024-05-15 17:17:35.633339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.000 qpair failed and we were unable to recover it. 00:26:48.001 [2024-05-15 17:17:35.643283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.001 [2024-05-15 17:17:35.643373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.001 [2024-05-15 17:17:35.643388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.001 [2024-05-15 17:17:35.643395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.001 [2024-05-15 17:17:35.643400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.001 [2024-05-15 17:17:35.643415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.001 qpair failed and we were unable to recover it. 00:26:48.001 [2024-05-15 17:17:35.653355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.001 [2024-05-15 17:17:35.653418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.001 [2024-05-15 17:17:35.653433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.001 [2024-05-15 17:17:35.653445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.001 [2024-05-15 17:17:35.653451] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.001 [2024-05-15 17:17:35.653465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.001 qpair failed and we were unable to recover it. 
00:26:48.260 [2024-05-15 17:17:35.663327] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.260 [2024-05-15 17:17:35.663391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.260 [2024-05-15 17:17:35.663406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.260 [2024-05-15 17:17:35.663412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.260 [2024-05-15 17:17:35.663418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.260 [2024-05-15 17:17:35.663433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.260 qpair failed and we were unable to recover it. 00:26:48.260 [2024-05-15 17:17:35.673352] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.260 [2024-05-15 17:17:35.673417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.260 [2024-05-15 17:17:35.673431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.260 [2024-05-15 17:17:35.673438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.260 [2024-05-15 17:17:35.673444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.260 [2024-05-15 17:17:35.673458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.260 qpair failed and we were unable to recover it. 00:26:48.260 [2024-05-15 17:17:35.683442] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.260 [2024-05-15 17:17:35.683546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.260 [2024-05-15 17:17:35.683560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.260 [2024-05-15 17:17:35.683567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.260 [2024-05-15 17:17:35.683572] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.260 [2024-05-15 17:17:35.683587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.260 qpair failed and we were unable to recover it. 
00:26:48.260 [2024-05-15 17:17:35.693474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.260 [2024-05-15 17:17:35.693536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.260 [2024-05-15 17:17:35.693549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.260 [2024-05-15 17:17:35.693556] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.260 [2024-05-15 17:17:35.693562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.260 [2024-05-15 17:17:35.693576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.260 qpair failed and we were unable to recover it. 00:26:48.260 [2024-05-15 17:17:35.703513] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.260 [2024-05-15 17:17:35.703570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.260 [2024-05-15 17:17:35.703585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.260 [2024-05-15 17:17:35.703592] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.260 [2024-05-15 17:17:35.703598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.260 [2024-05-15 17:17:35.703612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.260 qpair failed and we were unable to recover it. 00:26:48.260 [2024-05-15 17:17:35.713536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.260 [2024-05-15 17:17:35.713597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.260 [2024-05-15 17:17:35.713612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.260 [2024-05-15 17:17:35.713618] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.260 [2024-05-15 17:17:35.713624] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.260 [2024-05-15 17:17:35.713638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.260 qpair failed and we were unable to recover it. 
00:26:48.260 [2024-05-15 17:17:35.723501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.260 [2024-05-15 17:17:35.723561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.260 [2024-05-15 17:17:35.723576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.260 [2024-05-15 17:17:35.723583] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.260 [2024-05-15 17:17:35.723589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.260 [2024-05-15 17:17:35.723603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.260 qpair failed and we were unable to recover it. 00:26:48.260 [2024-05-15 17:17:35.733610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.260 [2024-05-15 17:17:35.733670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.260 [2024-05-15 17:17:35.733684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.260 [2024-05-15 17:17:35.733691] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.260 [2024-05-15 17:17:35.733697] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.260 [2024-05-15 17:17:35.733711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.260 qpair failed and we were unable to recover it. 00:26:48.260 [2024-05-15 17:17:35.743595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.260 [2024-05-15 17:17:35.743656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.260 [2024-05-15 17:17:35.743674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.260 [2024-05-15 17:17:35.743681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.260 [2024-05-15 17:17:35.743687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.261 [2024-05-15 17:17:35.743701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.261 qpair failed and we were unable to recover it. 
00:26:48.261 [2024-05-15 17:17:35.753656] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.261 [2024-05-15 17:17:35.753716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.261 [2024-05-15 17:17:35.753730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.261 [2024-05-15 17:17:35.753737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.261 [2024-05-15 17:17:35.753743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.261 [2024-05-15 17:17:35.753758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.261 qpair failed and we were unable to recover it. 00:26:48.261 [2024-05-15 17:17:35.763674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.261 [2024-05-15 17:17:35.763732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.261 [2024-05-15 17:17:35.763747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.261 [2024-05-15 17:17:35.763754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.261 [2024-05-15 17:17:35.763760] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.261 [2024-05-15 17:17:35.763775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.261 qpair failed and we were unable to recover it. 00:26:48.261 [2024-05-15 17:17:35.773737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.261 [2024-05-15 17:17:35.773802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.261 [2024-05-15 17:17:35.773816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.261 [2024-05-15 17:17:35.773823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.261 [2024-05-15 17:17:35.773829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.261 [2024-05-15 17:17:35.773845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.261 qpair failed and we were unable to recover it. 
00:26:48.261 [2024-05-15 17:17:35.783738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.261 [2024-05-15 17:17:35.783801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.261 [2024-05-15 17:17:35.783815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.261 [2024-05-15 17:17:35.783822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.261 [2024-05-15 17:17:35.783828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.261 [2024-05-15 17:17:35.783845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.261 qpair failed and we were unable to recover it. 00:26:48.261 [2024-05-15 17:17:35.793773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.261 [2024-05-15 17:17:35.793836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.261 [2024-05-15 17:17:35.793850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.261 [2024-05-15 17:17:35.793857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.261 [2024-05-15 17:17:35.793863] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.261 [2024-05-15 17:17:35.793878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.261 qpair failed and we were unable to recover it. 00:26:48.261 [2024-05-15 17:17:35.803806] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.261 [2024-05-15 17:17:35.803869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.261 [2024-05-15 17:17:35.803883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.261 [2024-05-15 17:17:35.803890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.261 [2024-05-15 17:17:35.803896] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.261 [2024-05-15 17:17:35.803910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.261 qpair failed and we were unable to recover it. 
00:26:48.261 [2024-05-15 17:17:35.813854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.261 [2024-05-15 17:17:35.813929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.261 [2024-05-15 17:17:35.813944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.261 [2024-05-15 17:17:35.813950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.261 [2024-05-15 17:17:35.813956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.261 [2024-05-15 17:17:35.813971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.261 qpair failed and we were unable to recover it. 00:26:48.261 [2024-05-15 17:17:35.823866] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.261 [2024-05-15 17:17:35.823929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.261 [2024-05-15 17:17:35.823944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.261 [2024-05-15 17:17:35.823951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.261 [2024-05-15 17:17:35.823957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.261 [2024-05-15 17:17:35.823971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.261 qpair failed and we were unable to recover it. 00:26:48.261 [2024-05-15 17:17:35.833906] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.261 [2024-05-15 17:17:35.833967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.261 [2024-05-15 17:17:35.833984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.261 [2024-05-15 17:17:35.833991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.261 [2024-05-15 17:17:35.833997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.261 [2024-05-15 17:17:35.834011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.261 qpair failed and we were unable to recover it. 
00:26:48.261 [2024-05-15 17:17:35.843920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.261 [2024-05-15 17:17:35.843987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.261 [2024-05-15 17:17:35.844002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.261 [2024-05-15 17:17:35.844009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.261 [2024-05-15 17:17:35.844015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.261 [2024-05-15 17:17:35.844030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.261 qpair failed and we were unable to recover it. 00:26:48.261 [2024-05-15 17:17:35.853950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.261 [2024-05-15 17:17:35.854007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.261 [2024-05-15 17:17:35.854021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.261 [2024-05-15 17:17:35.854028] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.261 [2024-05-15 17:17:35.854034] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.261 [2024-05-15 17:17:35.854048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.261 qpair failed and we were unable to recover it. 00:26:48.261 [2024-05-15 17:17:35.863970] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.261 [2024-05-15 17:17:35.864029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.261 [2024-05-15 17:17:35.864044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.261 [2024-05-15 17:17:35.864051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.261 [2024-05-15 17:17:35.864057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.261 [2024-05-15 17:17:35.864071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.261 qpair failed and we were unable to recover it. 
00:26:48.261 [2024-05-15 17:17:35.874007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.261 [2024-05-15 17:17:35.874067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.261 [2024-05-15 17:17:35.874082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.261 [2024-05-15 17:17:35.874089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.261 [2024-05-15 17:17:35.874098] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.262 [2024-05-15 17:17:35.874112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.262 qpair failed and we were unable to recover it. 00:26:48.262 [2024-05-15 17:17:35.884045] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.262 [2024-05-15 17:17:35.884111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.262 [2024-05-15 17:17:35.884126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.262 [2024-05-15 17:17:35.884133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.262 [2024-05-15 17:17:35.884139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.262 [2024-05-15 17:17:35.884153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.262 qpair failed and we were unable to recover it. 00:26:48.262 [2024-05-15 17:17:35.894061] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.262 [2024-05-15 17:17:35.894122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.262 [2024-05-15 17:17:35.894136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.262 [2024-05-15 17:17:35.894143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.262 [2024-05-15 17:17:35.894149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.262 [2024-05-15 17:17:35.894163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.262 qpair failed and we were unable to recover it. 
00:26:48.262 [2024-05-15 17:17:35.904074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.262 [2024-05-15 17:17:35.904138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.262 [2024-05-15 17:17:35.904153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.262 [2024-05-15 17:17:35.904160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.262 [2024-05-15 17:17:35.904169] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.262 [2024-05-15 17:17:35.904184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.262 qpair failed and we were unable to recover it. 00:26:48.262 [2024-05-15 17:17:35.914094] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.262 [2024-05-15 17:17:35.914157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.262 [2024-05-15 17:17:35.914176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.262 [2024-05-15 17:17:35.914183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.262 [2024-05-15 17:17:35.914189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.262 [2024-05-15 17:17:35.914204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.262 qpair failed and we were unable to recover it. 00:26:48.520 [2024-05-15 17:17:35.924141] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-05-15 17:17:35.924213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-05-15 17:17:35.924228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-05-15 17:17:35.924235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-05-15 17:17:35.924240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.521 [2024-05-15 17:17:35.924255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 
00:26:48.521 [2024-05-15 17:17:35.934112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-05-15 17:17:35.934200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-05-15 17:17:35.934215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-05-15 17:17:35.934221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-05-15 17:17:35.934228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.521 [2024-05-15 17:17:35.934242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 00:26:48.521 [2024-05-15 17:17:35.944222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-05-15 17:17:35.944323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-05-15 17:17:35.944337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-05-15 17:17:35.944344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-05-15 17:17:35.944350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.521 [2024-05-15 17:17:35.944365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 00:26:48.521 [2024-05-15 17:17:35.954221] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-05-15 17:17:35.954282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-05-15 17:17:35.954296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-05-15 17:17:35.954303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-05-15 17:17:35.954309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.521 [2024-05-15 17:17:35.954323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 
00:26:48.521 [2024-05-15 17:17:35.964244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-05-15 17:17:35.964307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-05-15 17:17:35.964321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-05-15 17:17:35.964328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-05-15 17:17:35.964337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.521 [2024-05-15 17:17:35.964351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 00:26:48.521 [2024-05-15 17:17:35.974336] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-05-15 17:17:35.974435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-05-15 17:17:35.974450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-05-15 17:17:35.974456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-05-15 17:17:35.974462] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.521 [2024-05-15 17:17:35.974478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 00:26:48.521 [2024-05-15 17:17:35.984329] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-05-15 17:17:35.984438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-05-15 17:17:35.984453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-05-15 17:17:35.984461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-05-15 17:17:35.984466] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.521 [2024-05-15 17:17:35.984481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 
00:26:48.521 [2024-05-15 17:17:35.994337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-05-15 17:17:35.994399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-05-15 17:17:35.994413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-05-15 17:17:35.994420] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-05-15 17:17:35.994426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.521 [2024-05-15 17:17:35.994440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 00:26:48.521 [2024-05-15 17:17:36.004408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-05-15 17:17:36.004510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-05-15 17:17:36.004524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-05-15 17:17:36.004531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-05-15 17:17:36.004537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.521 [2024-05-15 17:17:36.004552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 00:26:48.521 [2024-05-15 17:17:36.014380] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-05-15 17:17:36.014439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-05-15 17:17:36.014454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-05-15 17:17:36.014461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-05-15 17:17:36.014467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.521 [2024-05-15 17:17:36.014481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 
00:26:48.521 [2024-05-15 17:17:36.024444] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-05-15 17:17:36.024518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-05-15 17:17:36.024532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-05-15 17:17:36.024539] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-05-15 17:17:36.024545] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.521 [2024-05-15 17:17:36.024560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 00:26:48.521 [2024-05-15 17:17:36.034445] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-05-15 17:17:36.034510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-05-15 17:17:36.034524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-05-15 17:17:36.034531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-05-15 17:17:36.034537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.521 [2024-05-15 17:17:36.034552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 00:26:48.521 [2024-05-15 17:17:36.044466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.521 [2024-05-15 17:17:36.044526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.521 [2024-05-15 17:17:36.044540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.521 [2024-05-15 17:17:36.044547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.521 [2024-05-15 17:17:36.044554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.521 [2024-05-15 17:17:36.044568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.521 qpair failed and we were unable to recover it. 
00:26:48.521 [2024-05-15 17:17:36.054497] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-05-15 17:17:36.054555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-05-15 17:17:36.054570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-05-15 17:17:36.054579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-05-15 17:17:36.054586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.522 [2024-05-15 17:17:36.054601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 00:26:48.522 [2024-05-15 17:17:36.064522] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-05-15 17:17:36.064591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-05-15 17:17:36.064606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-05-15 17:17:36.064613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-05-15 17:17:36.064619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.522 [2024-05-15 17:17:36.064634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 00:26:48.522 [2024-05-15 17:17:36.074554] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-05-15 17:17:36.074617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-05-15 17:17:36.074632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-05-15 17:17:36.074639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-05-15 17:17:36.074646] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.522 [2024-05-15 17:17:36.074660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 
00:26:48.522 [2024-05-15 17:17:36.084579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-05-15 17:17:36.084644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-05-15 17:17:36.084659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-05-15 17:17:36.084666] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-05-15 17:17:36.084672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.522 [2024-05-15 17:17:36.084686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 00:26:48.522 [2024-05-15 17:17:36.094640] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-05-15 17:17:36.094705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-05-15 17:17:36.094720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-05-15 17:17:36.094727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-05-15 17:17:36.094733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.522 [2024-05-15 17:17:36.094747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 00:26:48.522 [2024-05-15 17:17:36.104650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-05-15 17:17:36.104707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-05-15 17:17:36.104723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-05-15 17:17:36.104730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-05-15 17:17:36.104737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.522 [2024-05-15 17:17:36.104752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 
00:26:48.522 [2024-05-15 17:17:36.114689] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-05-15 17:17:36.114766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-05-15 17:17:36.114780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-05-15 17:17:36.114787] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-05-15 17:17:36.114793] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.522 [2024-05-15 17:17:36.114807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 00:26:48.522 [2024-05-15 17:17:36.124721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-05-15 17:17:36.124783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-05-15 17:17:36.124798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-05-15 17:17:36.124804] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-05-15 17:17:36.124810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.522 [2024-05-15 17:17:36.124825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 00:26:48.522 [2024-05-15 17:17:36.134744] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-05-15 17:17:36.134805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-05-15 17:17:36.134820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-05-15 17:17:36.134827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-05-15 17:17:36.134833] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.522 [2024-05-15 17:17:36.134847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 
00:26:48.522 [2024-05-15 17:17:36.144772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-05-15 17:17:36.144831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-05-15 17:17:36.144848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-05-15 17:17:36.144855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-05-15 17:17:36.144861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.522 [2024-05-15 17:17:36.144875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 00:26:48.522 [2024-05-15 17:17:36.154796] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-05-15 17:17:36.154857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-05-15 17:17:36.154872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-05-15 17:17:36.154879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-05-15 17:17:36.154884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.522 [2024-05-15 17:17:36.154899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 00:26:48.522 [2024-05-15 17:17:36.164846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-05-15 17:17:36.164919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-05-15 17:17:36.164934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-05-15 17:17:36.164941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-05-15 17:17:36.164947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.522 [2024-05-15 17:17:36.164961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 
00:26:48.522 [2024-05-15 17:17:36.174846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.522 [2024-05-15 17:17:36.174909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.522 [2024-05-15 17:17:36.174924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.522 [2024-05-15 17:17:36.174931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.522 [2024-05-15 17:17:36.174937] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.522 [2024-05-15 17:17:36.174951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.522 qpair failed and we were unable to recover it. 00:26:48.782 [2024-05-15 17:17:36.184875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.782 [2024-05-15 17:17:36.184934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.782 [2024-05-15 17:17:36.184948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.782 [2024-05-15 17:17:36.184955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.782 [2024-05-15 17:17:36.184961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.782 [2024-05-15 17:17:36.184979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.782 qpair failed and we were unable to recover it. 00:26:48.782 [2024-05-15 17:17:36.194914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.782 [2024-05-15 17:17:36.194975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.782 [2024-05-15 17:17:36.194989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.782 [2024-05-15 17:17:36.194996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.782 [2024-05-15 17:17:36.195002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.782 [2024-05-15 17:17:36.195016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.782 qpair failed and we were unable to recover it. 
00:26:48.782 [2024-05-15 17:17:36.204943] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.782 [2024-05-15 17:17:36.205008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.782 [2024-05-15 17:17:36.205023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.782 [2024-05-15 17:17:36.205030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.782 [2024-05-15 17:17:36.205035] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.782 [2024-05-15 17:17:36.205050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.782 qpair failed and we were unable to recover it. 00:26:48.782 [2024-05-15 17:17:36.214896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.782 [2024-05-15 17:17:36.214992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.782 [2024-05-15 17:17:36.215006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.782 [2024-05-15 17:17:36.215012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.782 [2024-05-15 17:17:36.215018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.782 [2024-05-15 17:17:36.215032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.782 qpair failed and we were unable to recover it. 00:26:48.782 [2024-05-15 17:17:36.224989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.782 [2024-05-15 17:17:36.225054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.782 [2024-05-15 17:17:36.225069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.782 [2024-05-15 17:17:36.225076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.782 [2024-05-15 17:17:36.225081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.782 [2024-05-15 17:17:36.225096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.782 qpair failed and we were unable to recover it. 
00:26:48.782 [2024-05-15 17:17:36.235019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.782 [2024-05-15 17:17:36.235081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.782 [2024-05-15 17:17:36.235099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.782 [2024-05-15 17:17:36.235106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.782 [2024-05-15 17:17:36.235111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.782 [2024-05-15 17:17:36.235126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.782 qpair failed and we were unable to recover it. 00:26:48.782 [2024-05-15 17:17:36.245060] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.782 [2024-05-15 17:17:36.245123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.782 [2024-05-15 17:17:36.245138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.782 [2024-05-15 17:17:36.245145] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.782 [2024-05-15 17:17:36.245151] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.782 [2024-05-15 17:17:36.245173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.782 qpair failed and we were unable to recover it. 00:26:48.782 [2024-05-15 17:17:36.255069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.782 [2024-05-15 17:17:36.255129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.782 [2024-05-15 17:17:36.255143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.782 [2024-05-15 17:17:36.255150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.782 [2024-05-15 17:17:36.255156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.782 [2024-05-15 17:17:36.255174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.782 qpair failed and we were unable to recover it. 
00:26:48.782 [2024-05-15 17:17:36.265100] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.782 [2024-05-15 17:17:36.265160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.782 [2024-05-15 17:17:36.265178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.782 [2024-05-15 17:17:36.265185] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.782 [2024-05-15 17:17:36.265190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.782 [2024-05-15 17:17:36.265205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.782 qpair failed and we were unable to recover it. 00:26:48.783 [2024-05-15 17:17:36.275133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.783 [2024-05-15 17:17:36.275202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.783 [2024-05-15 17:17:36.275216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.783 [2024-05-15 17:17:36.275224] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.783 [2024-05-15 17:17:36.275229] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.783 [2024-05-15 17:17:36.275247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.783 qpair failed and we were unable to recover it. 00:26:48.783 [2024-05-15 17:17:36.285199] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.783 [2024-05-15 17:17:36.285311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.783 [2024-05-15 17:17:36.285325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.783 [2024-05-15 17:17:36.285332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.783 [2024-05-15 17:17:36.285338] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.783 [2024-05-15 17:17:36.285352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.783 qpair failed and we were unable to recover it. 
00:26:48.783 [2024-05-15 17:17:36.295189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.783 [2024-05-15 17:17:36.295252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.783 [2024-05-15 17:17:36.295267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.783 [2024-05-15 17:17:36.295273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.783 [2024-05-15 17:17:36.295279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.783 [2024-05-15 17:17:36.295293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.783 qpair failed and we were unable to recover it. 00:26:48.783 [2024-05-15 17:17:36.305248] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.783 [2024-05-15 17:17:36.305309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.783 [2024-05-15 17:17:36.305323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.783 [2024-05-15 17:17:36.305330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.783 [2024-05-15 17:17:36.305336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.783 [2024-05-15 17:17:36.305350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.783 qpair failed and we were unable to recover it. 00:26:48.783 [2024-05-15 17:17:36.315289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.783 [2024-05-15 17:17:36.315349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.783 [2024-05-15 17:17:36.315363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.783 [2024-05-15 17:17:36.315370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.783 [2024-05-15 17:17:36.315376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.783 [2024-05-15 17:17:36.315390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.783 qpair failed and we were unable to recover it. 
00:26:48.783 [2024-05-15 17:17:36.325263] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.783 [2024-05-15 17:17:36.325329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.783 [2024-05-15 17:17:36.325343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.783 [2024-05-15 17:17:36.325350] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.783 [2024-05-15 17:17:36.325356] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.783 [2024-05-15 17:17:36.325371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.783 qpair failed and we were unable to recover it. 00:26:48.783 [2024-05-15 17:17:36.335298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.783 [2024-05-15 17:17:36.335360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.783 [2024-05-15 17:17:36.335375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.783 [2024-05-15 17:17:36.335381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.783 [2024-05-15 17:17:36.335387] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.783 [2024-05-15 17:17:36.335401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.783 qpair failed and we were unable to recover it. 00:26:48.783 [2024-05-15 17:17:36.345324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.783 [2024-05-15 17:17:36.345385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.783 [2024-05-15 17:17:36.345400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.783 [2024-05-15 17:17:36.345406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.783 [2024-05-15 17:17:36.345412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.783 [2024-05-15 17:17:36.345427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.783 qpair failed and we were unable to recover it. 
00:26:48.783 [2024-05-15 17:17:36.355365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.783 [2024-05-15 17:17:36.355456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.783 [2024-05-15 17:17:36.355470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.783 [2024-05-15 17:17:36.355477] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.783 [2024-05-15 17:17:36.355482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.783 [2024-05-15 17:17:36.355496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.783 qpair failed and we were unable to recover it. 00:26:48.783 [2024-05-15 17:17:36.365439] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.783 [2024-05-15 17:17:36.365508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.783 [2024-05-15 17:17:36.365522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.784 [2024-05-15 17:17:36.365530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.784 [2024-05-15 17:17:36.365540] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.784 [2024-05-15 17:17:36.365555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.784 qpair failed and we were unable to recover it. 00:26:48.784 [2024-05-15 17:17:36.375429] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.784 [2024-05-15 17:17:36.375490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.784 [2024-05-15 17:17:36.375504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.784 [2024-05-15 17:17:36.375511] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.784 [2024-05-15 17:17:36.375517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.784 [2024-05-15 17:17:36.375531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.784 qpair failed and we were unable to recover it. 
00:26:48.784 [2024-05-15 17:17:36.385373] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.784 [2024-05-15 17:17:36.385432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.784 [2024-05-15 17:17:36.385446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.784 [2024-05-15 17:17:36.385453] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.784 [2024-05-15 17:17:36.385459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.784 [2024-05-15 17:17:36.385474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.784 qpair failed and we were unable to recover it. 00:26:48.784 [2024-05-15 17:17:36.395415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.784 [2024-05-15 17:17:36.395475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.784 [2024-05-15 17:17:36.395489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.784 [2024-05-15 17:17:36.395496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.784 [2024-05-15 17:17:36.395502] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.784 [2024-05-15 17:17:36.395516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.784 qpair failed and we were unable to recover it. 00:26:48.784 [2024-05-15 17:17:36.405507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.784 [2024-05-15 17:17:36.405580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.784 [2024-05-15 17:17:36.405594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.784 [2024-05-15 17:17:36.405601] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.784 [2024-05-15 17:17:36.405607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.784 [2024-05-15 17:17:36.405621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.784 qpair failed and we were unable to recover it. 
00:26:48.784 [2024-05-15 17:17:36.415530] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.784 [2024-05-15 17:17:36.415591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.784 [2024-05-15 17:17:36.415606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.784 [2024-05-15 17:17:36.415613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.784 [2024-05-15 17:17:36.415618] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.784 [2024-05-15 17:17:36.415632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.784 qpair failed and we were unable to recover it. 00:26:48.784 [2024-05-15 17:17:36.425591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.784 [2024-05-15 17:17:36.425649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.784 [2024-05-15 17:17:36.425664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.784 [2024-05-15 17:17:36.425671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.784 [2024-05-15 17:17:36.425679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.784 [2024-05-15 17:17:36.425693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.784 qpair failed and we were unable to recover it. 00:26:48.784 [2024-05-15 17:17:36.435594] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:48.784 [2024-05-15 17:17:36.435655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:48.784 [2024-05-15 17:17:36.435670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:48.784 [2024-05-15 17:17:36.435676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:48.784 [2024-05-15 17:17:36.435682] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:48.784 [2024-05-15 17:17:36.435696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:48.784 qpair failed and we were unable to recover it. 
00:26:49.043 [2024-05-15 17:17:36.445592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.043 [2024-05-15 17:17:36.445653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.043 [2024-05-15 17:17:36.445667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.043 [2024-05-15 17:17:36.445674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.043 [2024-05-15 17:17:36.445680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.043 [2024-05-15 17:17:36.445695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.043 qpair failed and we were unable to recover it. 00:26:49.043 [2024-05-15 17:17:36.455632] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.043 [2024-05-15 17:17:36.455696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.044 [2024-05-15 17:17:36.455710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.044 [2024-05-15 17:17:36.455720] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.044 [2024-05-15 17:17:36.455726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.044 [2024-05-15 17:17:36.455740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.044 qpair failed and we were unable to recover it. 00:26:49.044 [2024-05-15 17:17:36.465670] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.044 [2024-05-15 17:17:36.465733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.044 [2024-05-15 17:17:36.465747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.044 [2024-05-15 17:17:36.465753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.044 [2024-05-15 17:17:36.465759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.044 [2024-05-15 17:17:36.465773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.044 qpair failed and we were unable to recover it. 
00:26:49.044 [2024-05-15 17:17:36.475707] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.044 [2024-05-15 17:17:36.475767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.044 [2024-05-15 17:17:36.475781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.044 [2024-05-15 17:17:36.475788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.044 [2024-05-15 17:17:36.475794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.044 [2024-05-15 17:17:36.475807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.044 qpair failed and we were unable to recover it. 00:26:49.044 [2024-05-15 17:17:36.485735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.044 [2024-05-15 17:17:36.485797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.044 [2024-05-15 17:17:36.485812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.044 [2024-05-15 17:17:36.485818] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.044 [2024-05-15 17:17:36.485824] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.044 [2024-05-15 17:17:36.485839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.044 qpair failed and we were unable to recover it. 00:26:49.044 [2024-05-15 17:17:36.495780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.044 [2024-05-15 17:17:36.495842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.044 [2024-05-15 17:17:36.495856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.044 [2024-05-15 17:17:36.495863] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.044 [2024-05-15 17:17:36.495869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.044 [2024-05-15 17:17:36.495883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.044 qpair failed and we were unable to recover it. 
00:26:49.044 [2024-05-15 17:17:36.505790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.044 [2024-05-15 17:17:36.505855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.044 [2024-05-15 17:17:36.505869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.044 [2024-05-15 17:17:36.505876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.044 [2024-05-15 17:17:36.505881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.044 [2024-05-15 17:17:36.505895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.044 qpair failed and we were unable to recover it. 00:26:49.044 [2024-05-15 17:17:36.515808] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.044 [2024-05-15 17:17:36.515873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.044 [2024-05-15 17:17:36.515887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.044 [2024-05-15 17:17:36.515894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.044 [2024-05-15 17:17:36.515900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.044 [2024-05-15 17:17:36.515914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.044 qpair failed and we were unable to recover it. 00:26:49.044 [2024-05-15 17:17:36.525810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.044 [2024-05-15 17:17:36.525873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.044 [2024-05-15 17:17:36.525888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.044 [2024-05-15 17:17:36.525895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.044 [2024-05-15 17:17:36.525901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.044 [2024-05-15 17:17:36.525915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.044 qpair failed and we were unable to recover it. 
00:26:49.044 [2024-05-15 17:17:36.535886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.044 [2024-05-15 17:17:36.535948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.044 [2024-05-15 17:17:36.535962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.044 [2024-05-15 17:17:36.535969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.044 [2024-05-15 17:17:36.535975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.044 [2024-05-15 17:17:36.535989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.044 qpair failed and we were unable to recover it. 00:26:49.044 [2024-05-15 17:17:36.545891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.044 [2024-05-15 17:17:36.545955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.044 [2024-05-15 17:17:36.545972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.044 [2024-05-15 17:17:36.545979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.044 [2024-05-15 17:17:36.545985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.044 [2024-05-15 17:17:36.545999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.044 qpair failed and we were unable to recover it. 00:26:49.044 [2024-05-15 17:17:36.555933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.044 [2024-05-15 17:17:36.555993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.044 [2024-05-15 17:17:36.556007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.044 [2024-05-15 17:17:36.556014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.044 [2024-05-15 17:17:36.556020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.044 [2024-05-15 17:17:36.556034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.044 qpair failed and we were unable to recover it. 
00:26:49.044 [2024-05-15 17:17:36.566014] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.044 [2024-05-15 17:17:36.566079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.044 [2024-05-15 17:17:36.566093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.044 [2024-05-15 17:17:36.566100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.044 [2024-05-15 17:17:36.566106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.044 [2024-05-15 17:17:36.566120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.044 qpair failed and we were unable to recover it. 00:26:49.044 [2024-05-15 17:17:36.575956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.044 [2024-05-15 17:17:36.576041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.044 [2024-05-15 17:17:36.576055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.044 [2024-05-15 17:17:36.576062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.044 [2024-05-15 17:17:36.576068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.045 [2024-05-15 17:17:36.576082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.045 qpair failed and we were unable to recover it. 00:26:49.045 [2024-05-15 17:17:36.586068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.045 [2024-05-15 17:17:36.586131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.045 [2024-05-15 17:17:36.586146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.045 [2024-05-15 17:17:36.586153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.045 [2024-05-15 17:17:36.586159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.045 [2024-05-15 17:17:36.586177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.045 qpair failed and we were unable to recover it. 
00:26:49.045 [2024-05-15 17:17:36.596082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.045 [2024-05-15 17:17:36.596158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.045 [2024-05-15 17:17:36.596177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.045 [2024-05-15 17:17:36.596184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.045 [2024-05-15 17:17:36.596190] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.045 [2024-05-15 17:17:36.596205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.045 qpair failed and we were unable to recover it. 00:26:49.045 [2024-05-15 17:17:36.606089] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.045 [2024-05-15 17:17:36.606307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.045 [2024-05-15 17:17:36.606323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.045 [2024-05-15 17:17:36.606330] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.045 [2024-05-15 17:17:36.606336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.045 [2024-05-15 17:17:36.606352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.045 qpair failed and we were unable to recover it. 00:26:49.045 [2024-05-15 17:17:36.616112] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.045 [2024-05-15 17:17:36.616174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.045 [2024-05-15 17:17:36.616188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.045 [2024-05-15 17:17:36.616195] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.045 [2024-05-15 17:17:36.616201] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.045 [2024-05-15 17:17:36.616216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.045 qpair failed and we were unable to recover it. 
00:26:49.045 [2024-05-15 17:17:36.626130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.045 [2024-05-15 17:17:36.626194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.045 [2024-05-15 17:17:36.626209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.045 [2024-05-15 17:17:36.626216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.045 [2024-05-15 17:17:36.626221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.045 [2024-05-15 17:17:36.626236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.045 qpair failed and we were unable to recover it. 00:26:49.045 [2024-05-15 17:17:36.636190] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.045 [2024-05-15 17:17:36.636263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.045 [2024-05-15 17:17:36.636281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.045 [2024-05-15 17:17:36.636288] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.045 [2024-05-15 17:17:36.636294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.045 [2024-05-15 17:17:36.636308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.045 qpair failed and we were unable to recover it. 00:26:49.045 [2024-05-15 17:17:36.646193] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.045 [2024-05-15 17:17:36.646258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.045 [2024-05-15 17:17:36.646272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.045 [2024-05-15 17:17:36.646280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.045 [2024-05-15 17:17:36.646286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.045 [2024-05-15 17:17:36.646300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.045 qpair failed and we were unable to recover it. 
00:26:49.045 [2024-05-15 17:17:36.656205] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.045 [2024-05-15 17:17:36.656272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.045 [2024-05-15 17:17:36.656287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.045 [2024-05-15 17:17:36.656293] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.045 [2024-05-15 17:17:36.656299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.045 [2024-05-15 17:17:36.656313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.045 qpair failed and we were unable to recover it. 00:26:49.045 [2024-05-15 17:17:36.666255] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.045 [2024-05-15 17:17:36.666331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.045 [2024-05-15 17:17:36.666346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.045 [2024-05-15 17:17:36.666353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.045 [2024-05-15 17:17:36.666358] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.045 [2024-05-15 17:17:36.666373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.045 qpair failed and we were unable to recover it. 00:26:49.045 [2024-05-15 17:17:36.676219] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.045 [2024-05-15 17:17:36.676287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.045 [2024-05-15 17:17:36.676301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.045 [2024-05-15 17:17:36.676311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.045 [2024-05-15 17:17:36.676317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.045 [2024-05-15 17:17:36.676337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.045 qpair failed and we were unable to recover it. 
00:26:49.045 [2024-05-15 17:17:36.686317] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.045 [2024-05-15 17:17:36.686391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.045 [2024-05-15 17:17:36.686405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.045 [2024-05-15 17:17:36.686412] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.045 [2024-05-15 17:17:36.686418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.045 [2024-05-15 17:17:36.686432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.045 qpair failed and we were unable to recover it. 00:26:49.045 [2024-05-15 17:17:36.696324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.045 [2024-05-15 17:17:36.696429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.045 [2024-05-15 17:17:36.696444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.045 [2024-05-15 17:17:36.696451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.045 [2024-05-15 17:17:36.696456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.045 [2024-05-15 17:17:36.696470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.045 qpair failed and we were unable to recover it. 00:26:49.304 [2024-05-15 17:17:36.706400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.304 [2024-05-15 17:17:36.706469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.304 [2024-05-15 17:17:36.706484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.304 [2024-05-15 17:17:36.706491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.304 [2024-05-15 17:17:36.706496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.304 [2024-05-15 17:17:36.706510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.304 qpair failed and we were unable to recover it. 
00:26:49.304 [2024-05-15 17:17:36.716333] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.304 [2024-05-15 17:17:36.716400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.304 [2024-05-15 17:17:36.716415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.304 [2024-05-15 17:17:36.716421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.304 [2024-05-15 17:17:36.716428] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.304 [2024-05-15 17:17:36.716441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.304 qpair failed and we were unable to recover it. 00:26:49.304 [2024-05-15 17:17:36.726455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.304 [2024-05-15 17:17:36.726519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.304 [2024-05-15 17:17:36.726537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.304 [2024-05-15 17:17:36.726544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.304 [2024-05-15 17:17:36.726549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.304 [2024-05-15 17:17:36.726564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.304 qpair failed and we were unable to recover it. 00:26:49.304 [2024-05-15 17:17:36.736454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.304 [2024-05-15 17:17:36.736553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.304 [2024-05-15 17:17:36.736568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.304 [2024-05-15 17:17:36.736575] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.304 [2024-05-15 17:17:36.736580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.304 [2024-05-15 17:17:36.736596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.304 qpair failed and we were unable to recover it. 
00:26:49.304 [2024-05-15 17:17:36.746431] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.304 [2024-05-15 17:17:36.746506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.304 [2024-05-15 17:17:36.746521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.304 [2024-05-15 17:17:36.746528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.304 [2024-05-15 17:17:36.746534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.305 [2024-05-15 17:17:36.746549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.305 qpair failed and we were unable to recover it. 00:26:49.305 [2024-05-15 17:17:36.756534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.305 [2024-05-15 17:17:36.756599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.305 [2024-05-15 17:17:36.756614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.305 [2024-05-15 17:17:36.756621] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.305 [2024-05-15 17:17:36.756627] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.305 [2024-05-15 17:17:36.756641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.305 qpair failed and we were unable to recover it. 00:26:49.305 [2024-05-15 17:17:36.766537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.305 [2024-05-15 17:17:36.766606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.305 [2024-05-15 17:17:36.766621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.305 [2024-05-15 17:17:36.766628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.305 [2024-05-15 17:17:36.766637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.305 [2024-05-15 17:17:36.766652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.305 qpair failed and we were unable to recover it. 
00:26:49.305 [2024-05-15 17:17:36.776561] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.305 [2024-05-15 17:17:36.776644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.305 [2024-05-15 17:17:36.776659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.305 [2024-05-15 17:17:36.776666] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.305 [2024-05-15 17:17:36.776672] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.305 [2024-05-15 17:17:36.776686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.305 qpair failed and we were unable to recover it. 00:26:49.305 [2024-05-15 17:17:36.786596] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.305 [2024-05-15 17:17:36.786654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.305 [2024-05-15 17:17:36.786668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.305 [2024-05-15 17:17:36.786676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.305 [2024-05-15 17:17:36.786681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.305 [2024-05-15 17:17:36.786695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.305 qpair failed and we were unable to recover it. 00:26:49.305 [2024-05-15 17:17:36.796633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.305 [2024-05-15 17:17:36.796700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.305 [2024-05-15 17:17:36.796714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.305 [2024-05-15 17:17:36.796721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.305 [2024-05-15 17:17:36.796727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.305 [2024-05-15 17:17:36.796741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.305 qpair failed and we were unable to recover it. 
00:26:49.305 [2024-05-15 17:17:36.806668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.305 [2024-05-15 17:17:36.806728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.305 [2024-05-15 17:17:36.806742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.305 [2024-05-15 17:17:36.806750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.305 [2024-05-15 17:17:36.806755] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.305 [2024-05-15 17:17:36.806769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.305 qpair failed and we were unable to recover it. 00:26:49.305 [2024-05-15 17:17:36.816701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.305 [2024-05-15 17:17:36.816810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.305 [2024-05-15 17:17:36.816824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.305 [2024-05-15 17:17:36.816830] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.305 [2024-05-15 17:17:36.816837] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.305 [2024-05-15 17:17:36.816851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.305 qpair failed and we were unable to recover it. 00:26:49.305 [2024-05-15 17:17:36.826732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.305 [2024-05-15 17:17:36.826791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.305 [2024-05-15 17:17:36.826805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.305 [2024-05-15 17:17:36.826812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.305 [2024-05-15 17:17:36.826818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.305 [2024-05-15 17:17:36.826833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.305 qpair failed and we were unable to recover it. 
00:26:49.305 [2024-05-15 17:17:36.836761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.305 [2024-05-15 17:17:36.836827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.305 [2024-05-15 17:17:36.836841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.305 [2024-05-15 17:17:36.836848] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.305 [2024-05-15 17:17:36.836854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.305 [2024-05-15 17:17:36.836868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.305 qpair failed and we were unable to recover it. 00:26:49.305 [2024-05-15 17:17:36.846724] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.305 [2024-05-15 17:17:36.846784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.305 [2024-05-15 17:17:36.846798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.305 [2024-05-15 17:17:36.846805] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.305 [2024-05-15 17:17:36.846811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.305 [2024-05-15 17:17:36.846825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.305 qpair failed and we were unable to recover it. 00:26:49.305 [2024-05-15 17:17:36.856790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.305 [2024-05-15 17:17:36.856852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.305 [2024-05-15 17:17:36.856866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.305 [2024-05-15 17:17:36.856876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.305 [2024-05-15 17:17:36.856882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.305 [2024-05-15 17:17:36.856896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.305 qpair failed and we were unable to recover it. 
00:26:49.305 [2024-05-15 17:17:36.866816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.305 [2024-05-15 17:17:36.866873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.305 [2024-05-15 17:17:36.866888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.305 [2024-05-15 17:17:36.866895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.305 [2024-05-15 17:17:36.866901] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.305 [2024-05-15 17:17:36.866915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.305 qpair failed and we were unable to recover it. 00:26:49.305 [2024-05-15 17:17:36.876901] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.305 [2024-05-15 17:17:36.876968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.305 [2024-05-15 17:17:36.876982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.305 [2024-05-15 17:17:36.876989] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.305 [2024-05-15 17:17:36.876995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.305 [2024-05-15 17:17:36.877009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.305 qpair failed and we were unable to recover it. 00:26:49.306 [2024-05-15 17:17:36.886897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.306 [2024-05-15 17:17:36.887010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.306 [2024-05-15 17:17:36.887025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.306 [2024-05-15 17:17:36.887032] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.306 [2024-05-15 17:17:36.887038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.306 [2024-05-15 17:17:36.887053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.306 qpair failed and we were unable to recover it. 
00:26:49.306 [2024-05-15 17:17:36.896934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.306 [2024-05-15 17:17:36.896990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.306 [2024-05-15 17:17:36.897004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.306 [2024-05-15 17:17:36.897012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.306 [2024-05-15 17:17:36.897018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.306 [2024-05-15 17:17:36.897032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.306 qpair failed and we were unable to recover it. 00:26:49.306 [2024-05-15 17:17:36.906960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.306 [2024-05-15 17:17:36.907021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.306 [2024-05-15 17:17:36.907035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.306 [2024-05-15 17:17:36.907042] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.306 [2024-05-15 17:17:36.907048] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.306 [2024-05-15 17:17:36.907062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.306 qpair failed and we were unable to recover it. 00:26:49.306 [2024-05-15 17:17:36.916983] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.306 [2024-05-15 17:17:36.917049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.306 [2024-05-15 17:17:36.917063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.306 [2024-05-15 17:17:36.917070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.306 [2024-05-15 17:17:36.917076] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.306 [2024-05-15 17:17:36.917090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.306 qpair failed and we were unable to recover it. 
00:26:49.306 [2024-05-15 17:17:36.927012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.306 [2024-05-15 17:17:36.927086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.306 [2024-05-15 17:17:36.927100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.306 [2024-05-15 17:17:36.927107] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.306 [2024-05-15 17:17:36.927113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.306 [2024-05-15 17:17:36.927127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.306 qpair failed and we were unable to recover it. 00:26:49.306 [2024-05-15 17:17:36.937052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.306 [2024-05-15 17:17:36.937116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.306 [2024-05-15 17:17:36.937130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.306 [2024-05-15 17:17:36.937138] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.306 [2024-05-15 17:17:36.937143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.306 [2024-05-15 17:17:36.937157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.306 qpair failed and we were unable to recover it. 00:26:49.306 [2024-05-15 17:17:36.947056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.306 [2024-05-15 17:17:36.947116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.306 [2024-05-15 17:17:36.947130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.306 [2024-05-15 17:17:36.947140] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.306 [2024-05-15 17:17:36.947146] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.306 [2024-05-15 17:17:36.947161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.306 qpair failed and we were unable to recover it. 
00:26:49.306 [2024-05-15 17:17:36.957098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.306 [2024-05-15 17:17:36.957159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.306 [2024-05-15 17:17:36.957176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.306 [2024-05-15 17:17:36.957183] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.306 [2024-05-15 17:17:36.957189] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.306 [2024-05-15 17:17:36.957203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.306 qpair failed and we were unable to recover it. 00:26:49.565 [2024-05-15 17:17:36.967157] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.565 [2024-05-15 17:17:36.967221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.565 [2024-05-15 17:17:36.967236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.565 [2024-05-15 17:17:36.967243] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.565 [2024-05-15 17:17:36.967248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.565 [2024-05-15 17:17:36.967262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.565 qpair failed and we were unable to recover it. 00:26:49.565 [2024-05-15 17:17:36.977176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.565 [2024-05-15 17:17:36.977368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.565 [2024-05-15 17:17:36.977384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.565 [2024-05-15 17:17:36.977391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.565 [2024-05-15 17:17:36.977396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.565 [2024-05-15 17:17:36.977411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.565 qpair failed and we were unable to recover it. 
00:26:49.565 [2024-05-15 17:17:36.987191] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.565 [2024-05-15 17:17:36.987262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.565 [2024-05-15 17:17:36.987276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.565 [2024-05-15 17:17:36.987283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.565 [2024-05-15 17:17:36.987289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.565 [2024-05-15 17:17:36.987304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.565 qpair failed and we were unable to recover it. 00:26:49.565 [2024-05-15 17:17:36.997287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.565 [2024-05-15 17:17:36.997405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.565 [2024-05-15 17:17:36.997420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.565 [2024-05-15 17:17:36.997427] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.565 [2024-05-15 17:17:36.997432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.565 [2024-05-15 17:17:36.997447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.565 qpair failed and we were unable to recover it. 00:26:49.565 [2024-05-15 17:17:37.007297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.565 [2024-05-15 17:17:37.007404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.565 [2024-05-15 17:17:37.007419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.565 [2024-05-15 17:17:37.007426] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.565 [2024-05-15 17:17:37.007432] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.565 [2024-05-15 17:17:37.007447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.565 qpair failed and we were unable to recover it. 
00:26:49.565 [2024-05-15 17:17:37.017332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.565 [2024-05-15 17:17:37.017398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.565 [2024-05-15 17:17:37.017412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.565 [2024-05-15 17:17:37.017419] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.565 [2024-05-15 17:17:37.017425] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.565 [2024-05-15 17:17:37.017439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.565 qpair failed and we were unable to recover it. 00:26:49.565 [2024-05-15 17:17:37.027299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.565 [2024-05-15 17:17:37.027359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.565 [2024-05-15 17:17:37.027374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.565 [2024-05-15 17:17:37.027380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.565 [2024-05-15 17:17:37.027386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.565 [2024-05-15 17:17:37.027401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.565 qpair failed and we were unable to recover it. 00:26:49.565 [2024-05-15 17:17:37.037336] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.565 [2024-05-15 17:17:37.037398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.565 [2024-05-15 17:17:37.037416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.565 [2024-05-15 17:17:37.037423] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.565 [2024-05-15 17:17:37.037429] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.565 [2024-05-15 17:17:37.037443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.565 qpair failed and we were unable to recover it. 
00:26:49.565 [2024-05-15 17:17:37.047401] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.566 [2024-05-15 17:17:37.047461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.566 [2024-05-15 17:17:37.047476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.566 [2024-05-15 17:17:37.047483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.566 [2024-05-15 17:17:37.047489] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.566 [2024-05-15 17:17:37.047503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.566 qpair failed and we were unable to recover it. 00:26:49.566 [2024-05-15 17:17:37.057387] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.566 [2024-05-15 17:17:37.057451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.566 [2024-05-15 17:17:37.057465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.566 [2024-05-15 17:17:37.057472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.566 [2024-05-15 17:17:37.057478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.566 [2024-05-15 17:17:37.057492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.566 qpair failed and we were unable to recover it. 00:26:49.566 [2024-05-15 17:17:37.067417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.566 [2024-05-15 17:17:37.067473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.566 [2024-05-15 17:17:37.067488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.566 [2024-05-15 17:17:37.067495] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.566 [2024-05-15 17:17:37.067500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.566 [2024-05-15 17:17:37.067514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.566 qpair failed and we were unable to recover it. 
00:26:49.566 [2024-05-15 17:17:37.077394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.566 [2024-05-15 17:17:37.077477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.566 [2024-05-15 17:17:37.077492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.566 [2024-05-15 17:17:37.077499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.566 [2024-05-15 17:17:37.077505] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.566 [2024-05-15 17:17:37.077523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.566 qpair failed and we were unable to recover it. 00:26:49.566 [2024-05-15 17:17:37.087459] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.566 [2024-05-15 17:17:37.087525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.566 [2024-05-15 17:17:37.087540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.566 [2024-05-15 17:17:37.087547] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.566 [2024-05-15 17:17:37.087552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.566 [2024-05-15 17:17:37.087566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.566 qpair failed and we were unable to recover it. 00:26:49.566 [2024-05-15 17:17:37.097518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.566 [2024-05-15 17:17:37.097581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.566 [2024-05-15 17:17:37.097595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.566 [2024-05-15 17:17:37.097602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.566 [2024-05-15 17:17:37.097608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.566 [2024-05-15 17:17:37.097622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.566 qpair failed and we were unable to recover it. 
00:26:49.566 [2024-05-15 17:17:37.107455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.566 [2024-05-15 17:17:37.107519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.566 [2024-05-15 17:17:37.107534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.566 [2024-05-15 17:17:37.107541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.566 [2024-05-15 17:17:37.107546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.566 [2024-05-15 17:17:37.107560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.566 qpair failed and we were unable to recover it. 00:26:49.566 [2024-05-15 17:17:37.117548] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.566 [2024-05-15 17:17:37.117614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.566 [2024-05-15 17:17:37.117628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.566 [2024-05-15 17:17:37.117635] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.566 [2024-05-15 17:17:37.117641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.566 [2024-05-15 17:17:37.117655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.566 qpair failed and we were unable to recover it. 00:26:49.566 [2024-05-15 17:17:37.127623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.566 [2024-05-15 17:17:37.127686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.566 [2024-05-15 17:17:37.127703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.566 [2024-05-15 17:17:37.127710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.566 [2024-05-15 17:17:37.127716] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.566 [2024-05-15 17:17:37.127730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.566 qpair failed and we were unable to recover it. 
00:26:49.566 [2024-05-15 17:17:37.137601] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.566 [2024-05-15 17:17:37.137669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.566 [2024-05-15 17:17:37.137683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.566 [2024-05-15 17:17:37.137690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.566 [2024-05-15 17:17:37.137696] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.566 [2024-05-15 17:17:37.137710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.566 qpair failed and we were unable to recover it. 00:26:49.566 [2024-05-15 17:17:37.147580] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.566 [2024-05-15 17:17:37.147637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.566 [2024-05-15 17:17:37.147651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.566 [2024-05-15 17:17:37.147658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.566 [2024-05-15 17:17:37.147664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.566 [2024-05-15 17:17:37.147678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.566 qpair failed and we were unable to recover it. 00:26:49.566 [2024-05-15 17:17:37.157633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.566 [2024-05-15 17:17:37.157713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.566 [2024-05-15 17:17:37.157728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.566 [2024-05-15 17:17:37.157735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.566 [2024-05-15 17:17:37.157740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.566 [2024-05-15 17:17:37.157754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.566 qpair failed and we were unable to recover it. 
00:26:49.566 [2024-05-15 17:17:37.167678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.566 [2024-05-15 17:17:37.167740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.566 [2024-05-15 17:17:37.167755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.566 [2024-05-15 17:17:37.167763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.566 [2024-05-15 17:17:37.167772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.566 [2024-05-15 17:17:37.167787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.566 qpair failed and we were unable to recover it. 00:26:49.566 [2024-05-15 17:17:37.177743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.566 [2024-05-15 17:17:37.177811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.567 [2024-05-15 17:17:37.177825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.567 [2024-05-15 17:17:37.177832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.567 [2024-05-15 17:17:37.177839] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.567 [2024-05-15 17:17:37.177853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.567 qpair failed and we were unable to recover it. 00:26:49.567 [2024-05-15 17:17:37.187771] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.567 [2024-05-15 17:17:37.187855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.567 [2024-05-15 17:17:37.187868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.567 [2024-05-15 17:17:37.187875] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.567 [2024-05-15 17:17:37.187881] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.567 [2024-05-15 17:17:37.187894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.567 qpair failed and we were unable to recover it. 
00:26:49.567 [2024-05-15 17:17:37.197779] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.567 [2024-05-15 17:17:37.197842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.567 [2024-05-15 17:17:37.197857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.567 [2024-05-15 17:17:37.197864] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.567 [2024-05-15 17:17:37.197870] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.567 [2024-05-15 17:17:37.197884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.567 qpair failed and we were unable to recover it. 00:26:49.567 [2024-05-15 17:17:37.207803] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.567 [2024-05-15 17:17:37.207872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.567 [2024-05-15 17:17:37.207887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.567 [2024-05-15 17:17:37.207894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.567 [2024-05-15 17:17:37.207900] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.567 [2024-05-15 17:17:37.207914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.567 qpair failed and we were unable to recover it. 00:26:49.567 [2024-05-15 17:17:37.217845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.567 [2024-05-15 17:17:37.217912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.567 [2024-05-15 17:17:37.217927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.567 [2024-05-15 17:17:37.217933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.567 [2024-05-15 17:17:37.217939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.567 [2024-05-15 17:17:37.217953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.567 qpair failed and we were unable to recover it. 
00:26:49.828 [2024-05-15 17:17:37.227807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.828 [2024-05-15 17:17:37.227870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.828 [2024-05-15 17:17:37.227885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.828 [2024-05-15 17:17:37.227891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.828 [2024-05-15 17:17:37.227897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.828 [2024-05-15 17:17:37.227911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.828 qpair failed and we were unable to recover it. 00:26:49.828 [2024-05-15 17:17:37.237918] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.828 [2024-05-15 17:17:37.237981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.828 [2024-05-15 17:17:37.237995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.828 [2024-05-15 17:17:37.238001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.828 [2024-05-15 17:17:37.238007] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.828 [2024-05-15 17:17:37.238021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.828 qpair failed and we were unable to recover it. 00:26:49.828 [2024-05-15 17:17:37.247956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.828 [2024-05-15 17:17:37.248032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.828 [2024-05-15 17:17:37.248047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.828 [2024-05-15 17:17:37.248054] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.828 [2024-05-15 17:17:37.248059] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.828 [2024-05-15 17:17:37.248074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.828 qpair failed and we were unable to recover it. 
00:26:49.828 [2024-05-15 17:17:37.257993] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.828 [2024-05-15 17:17:37.258057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.828 [2024-05-15 17:17:37.258072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.828 [2024-05-15 17:17:37.258082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.828 [2024-05-15 17:17:37.258087] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.828 [2024-05-15 17:17:37.258101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.828 qpair failed and we were unable to recover it. 00:26:49.828 [2024-05-15 17:17:37.267996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.828 [2024-05-15 17:17:37.268070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.828 [2024-05-15 17:17:37.268084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.828 [2024-05-15 17:17:37.268091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.828 [2024-05-15 17:17:37.268097] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.828 [2024-05-15 17:17:37.268111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.828 qpair failed and we were unable to recover it. 00:26:49.828 [2024-05-15 17:17:37.278006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.828 [2024-05-15 17:17:37.278068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.828 [2024-05-15 17:17:37.278082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.828 [2024-05-15 17:17:37.278089] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.828 [2024-05-15 17:17:37.278095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.828 [2024-05-15 17:17:37.278109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.828 qpair failed and we were unable to recover it. 
00:26:49.828 [2024-05-15 17:17:37.288069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.828 [2024-05-15 17:17:37.288129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.828 [2024-05-15 17:17:37.288144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.828 [2024-05-15 17:17:37.288151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.828 [2024-05-15 17:17:37.288156] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.828 [2024-05-15 17:17:37.288174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.828 qpair failed and we were unable to recover it. 00:26:49.828 [2024-05-15 17:17:37.298111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.828 [2024-05-15 17:17:37.298192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.828 [2024-05-15 17:17:37.298206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.828 [2024-05-15 17:17:37.298213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.828 [2024-05-15 17:17:37.298219] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.828 [2024-05-15 17:17:37.298233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.828 qpair failed and we were unable to recover it. 00:26:49.828 [2024-05-15 17:17:37.308104] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.828 [2024-05-15 17:17:37.308162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.828 [2024-05-15 17:17:37.308180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.828 [2024-05-15 17:17:37.308187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.828 [2024-05-15 17:17:37.308193] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.828 [2024-05-15 17:17:37.308208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.828 qpair failed and we were unable to recover it. 
00:26:49.828 [2024-05-15 17:17:37.318149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.828 [2024-05-15 17:17:37.318212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.828 [2024-05-15 17:17:37.318227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.828 [2024-05-15 17:17:37.318233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.828 [2024-05-15 17:17:37.318239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.828 [2024-05-15 17:17:37.318254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.828 qpair failed and we were unable to recover it. 00:26:49.828 [2024-05-15 17:17:37.328196] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.828 [2024-05-15 17:17:37.328259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.828 [2024-05-15 17:17:37.328274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.828 [2024-05-15 17:17:37.328280] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.828 [2024-05-15 17:17:37.328286] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.828 [2024-05-15 17:17:37.328300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.829 qpair failed and we were unable to recover it. 00:26:49.829 [2024-05-15 17:17:37.338221] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.829 [2024-05-15 17:17:37.338279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.829 [2024-05-15 17:17:37.338293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.829 [2024-05-15 17:17:37.338301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.829 [2024-05-15 17:17:37.338306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.829 [2024-05-15 17:17:37.338321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.829 qpair failed and we were unable to recover it. 
00:26:49.829 [2024-05-15 17:17:37.348253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.829 [2024-05-15 17:17:37.348324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.829 [2024-05-15 17:17:37.348339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.829 [2024-05-15 17:17:37.348348] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.829 [2024-05-15 17:17:37.348354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.829 [2024-05-15 17:17:37.348368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.829 qpair failed and we were unable to recover it. 00:26:49.829 [2024-05-15 17:17:37.358265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.829 [2024-05-15 17:17:37.358328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.829 [2024-05-15 17:17:37.358342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.829 [2024-05-15 17:17:37.358349] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.829 [2024-05-15 17:17:37.358355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.829 [2024-05-15 17:17:37.358369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.829 qpair failed and we were unable to recover it. 00:26:49.829 [2024-05-15 17:17:37.368335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.829 [2024-05-15 17:17:37.368396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.829 [2024-05-15 17:17:37.368410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.829 [2024-05-15 17:17:37.368417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.829 [2024-05-15 17:17:37.368423] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.829 [2024-05-15 17:17:37.368437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.829 qpair failed and we were unable to recover it. 
00:26:49.829 [2024-05-15 17:17:37.378304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.829 [2024-05-15 17:17:37.378382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.829 [2024-05-15 17:17:37.378397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.829 [2024-05-15 17:17:37.378404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.829 [2024-05-15 17:17:37.378410] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.829 [2024-05-15 17:17:37.378424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.829 qpair failed and we were unable to recover it. 00:26:49.829 [2024-05-15 17:17:37.388337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.829 [2024-05-15 17:17:37.388399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.829 [2024-05-15 17:17:37.388414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.829 [2024-05-15 17:17:37.388421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.829 [2024-05-15 17:17:37.388427] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.829 [2024-05-15 17:17:37.388441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.829 qpair failed and we were unable to recover it. 00:26:49.829 [2024-05-15 17:17:37.398386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.829 [2024-05-15 17:17:37.398452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.829 [2024-05-15 17:17:37.398467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.829 [2024-05-15 17:17:37.398474] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.829 [2024-05-15 17:17:37.398480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.829 [2024-05-15 17:17:37.398494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.829 qpair failed and we were unable to recover it. 
00:26:49.829 [2024-05-15 17:17:37.408400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.829 [2024-05-15 17:17:37.408466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.829 [2024-05-15 17:17:37.408480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.829 [2024-05-15 17:17:37.408487] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.829 [2024-05-15 17:17:37.408493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.829 [2024-05-15 17:17:37.408507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.829 qpair failed and we were unable to recover it. 00:26:49.829 [2024-05-15 17:17:37.418446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.829 [2024-05-15 17:17:37.418507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.829 [2024-05-15 17:17:37.418521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.829 [2024-05-15 17:17:37.418528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.829 [2024-05-15 17:17:37.418534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.829 [2024-05-15 17:17:37.418548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.829 qpair failed and we were unable to recover it. 00:26:49.829 [2024-05-15 17:17:37.428474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.829 [2024-05-15 17:17:37.428532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.829 [2024-05-15 17:17:37.428547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.829 [2024-05-15 17:17:37.428554] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.829 [2024-05-15 17:17:37.428560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.829 [2024-05-15 17:17:37.428574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.829 qpair failed and we were unable to recover it. 
00:26:49.829 [2024-05-15 17:17:37.438499] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.829 [2024-05-15 17:17:37.438563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.829 [2024-05-15 17:17:37.438580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.829 [2024-05-15 17:17:37.438588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.829 [2024-05-15 17:17:37.438594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.829 [2024-05-15 17:17:37.438608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.829 qpair failed and we were unable to recover it. 00:26:49.829 [2024-05-15 17:17:37.448535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.829 [2024-05-15 17:17:37.448608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.829 [2024-05-15 17:17:37.448623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.829 [2024-05-15 17:17:37.448630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.829 [2024-05-15 17:17:37.448636] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.829 [2024-05-15 17:17:37.448651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.829 qpair failed and we were unable to recover it. 00:26:49.829 [2024-05-15 17:17:37.458578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.829 [2024-05-15 17:17:37.458641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.829 [2024-05-15 17:17:37.458655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.829 [2024-05-15 17:17:37.458662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.829 [2024-05-15 17:17:37.458669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.829 [2024-05-15 17:17:37.458684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.830 qpair failed and we were unable to recover it. 
00:26:49.830 [2024-05-15 17:17:37.468578] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.830 [2024-05-15 17:17:37.468637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.830 [2024-05-15 17:17:37.468651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.830 [2024-05-15 17:17:37.468659] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.830 [2024-05-15 17:17:37.468665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.830 [2024-05-15 17:17:37.468679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.830 qpair failed and we were unable to recover it. 00:26:49.830 [2024-05-15 17:17:37.478597] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:49.830 [2024-05-15 17:17:37.478660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:49.830 [2024-05-15 17:17:37.478674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:49.830 [2024-05-15 17:17:37.478681] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:49.830 [2024-05-15 17:17:37.478687] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:49.830 [2024-05-15 17:17:37.478705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:49.830 qpair failed and we were unable to recover it. 00:26:50.089 [2024-05-15 17:17:37.488641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.089 [2024-05-15 17:17:37.488703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.089 [2024-05-15 17:17:37.488718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.089 [2024-05-15 17:17:37.488725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.089 [2024-05-15 17:17:37.488731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.089 [2024-05-15 17:17:37.488745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.089 qpair failed and we were unable to recover it. 
00:26:50.089 [2024-05-15 17:17:37.498681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.089 [2024-05-15 17:17:37.498743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.089 [2024-05-15 17:17:37.498757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.089 [2024-05-15 17:17:37.498764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.089 [2024-05-15 17:17:37.498770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.089 [2024-05-15 17:17:37.498784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.089 qpair failed and we were unable to recover it. 00:26:50.089 [2024-05-15 17:17:37.508702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.089 [2024-05-15 17:17:37.508765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.089 [2024-05-15 17:17:37.508779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.089 [2024-05-15 17:17:37.508786] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.089 [2024-05-15 17:17:37.508792] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.089 [2024-05-15 17:17:37.508806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.089 qpair failed and we were unable to recover it. 00:26:50.089 [2024-05-15 17:17:37.518713] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.089 [2024-05-15 17:17:37.518775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.089 [2024-05-15 17:17:37.518789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.089 [2024-05-15 17:17:37.518796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.089 [2024-05-15 17:17:37.518802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.089 [2024-05-15 17:17:37.518816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.089 qpair failed and we were unable to recover it. 
00:26:50.089 [2024-05-15 17:17:37.528736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.089 [2024-05-15 17:17:37.528793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.089 [2024-05-15 17:17:37.528811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.089 [2024-05-15 17:17:37.528817] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.089 [2024-05-15 17:17:37.528823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.089 [2024-05-15 17:17:37.528837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.089 qpair failed and we were unable to recover it. 00:26:50.089 [2024-05-15 17:17:37.538740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.089 [2024-05-15 17:17:37.538832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.089 [2024-05-15 17:17:37.538846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.089 [2024-05-15 17:17:37.538852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.089 [2024-05-15 17:17:37.538858] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.089 [2024-05-15 17:17:37.538872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.089 qpair failed and we were unable to recover it. 00:26:50.089 [2024-05-15 17:17:37.548787] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.089 [2024-05-15 17:17:37.548848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.089 [2024-05-15 17:17:37.548863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.089 [2024-05-15 17:17:37.548870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.089 [2024-05-15 17:17:37.548876] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.089 [2024-05-15 17:17:37.548890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.089 qpair failed and we were unable to recover it. 
00:26:50.089 [2024-05-15 17:17:37.558869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.089 [2024-05-15 17:17:37.558928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.089 [2024-05-15 17:17:37.558942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.089 [2024-05-15 17:17:37.558949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.089 [2024-05-15 17:17:37.558955] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.089 [2024-05-15 17:17:37.558969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.089 qpair failed and we were unable to recover it. 00:26:50.089 [2024-05-15 17:17:37.568910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.089 [2024-05-15 17:17:37.569010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.089 [2024-05-15 17:17:37.569024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.089 [2024-05-15 17:17:37.569031] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.089 [2024-05-15 17:17:37.569040] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.089 [2024-05-15 17:17:37.569055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.089 qpair failed and we were unable to recover it. 00:26:50.089 [2024-05-15 17:17:37.578902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.089 [2024-05-15 17:17:37.578960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.089 [2024-05-15 17:17:37.578975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.089 [2024-05-15 17:17:37.578982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.089 [2024-05-15 17:17:37.578989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.089 [2024-05-15 17:17:37.579003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.089 qpair failed and we were unable to recover it. 
00:26:50.089 [2024-05-15 17:17:37.588917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.089 [2024-05-15 17:17:37.588975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.089 [2024-05-15 17:17:37.588989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.089 [2024-05-15 17:17:37.588996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.089 [2024-05-15 17:17:37.589002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.089 [2024-05-15 17:17:37.589017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.089 qpair failed and we were unable to recover it. 00:26:50.089 [2024-05-15 17:17:37.598941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.090 [2024-05-15 17:17:37.599004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.090 [2024-05-15 17:17:37.599018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.090 [2024-05-15 17:17:37.599025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.090 [2024-05-15 17:17:37.599031] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.090 [2024-05-15 17:17:37.599045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.090 qpair failed and we were unable to recover it. 00:26:50.090 [2024-05-15 17:17:37.608900] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.090 [2024-05-15 17:17:37.608961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.090 [2024-05-15 17:17:37.608975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.090 [2024-05-15 17:17:37.608982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.090 [2024-05-15 17:17:37.608988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.090 [2024-05-15 17:17:37.609003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.090 qpair failed and we were unable to recover it. 
00:26:50.090 [2024-05-15 17:17:37.618951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.090 [2024-05-15 17:17:37.619019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.090 [2024-05-15 17:17:37.619038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.090 [2024-05-15 17:17:37.619045] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.090 [2024-05-15 17:17:37.619051] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.090 [2024-05-15 17:17:37.619067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.090 qpair failed and we were unable to recover it. 00:26:50.090 [2024-05-15 17:17:37.629027] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.090 [2024-05-15 17:17:37.629088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.090 [2024-05-15 17:17:37.629103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.090 [2024-05-15 17:17:37.629110] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.090 [2024-05-15 17:17:37.629116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.090 [2024-05-15 17:17:37.629130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.090 qpair failed and we were unable to recover it. 00:26:50.090 [2024-05-15 17:17:37.639058] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.090 [2024-05-15 17:17:37.639122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.090 [2024-05-15 17:17:37.639136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.090 [2024-05-15 17:17:37.639143] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.090 [2024-05-15 17:17:37.639149] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.090 [2024-05-15 17:17:37.639163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.090 qpair failed and we were unable to recover it. 
00:26:50.090 [2024-05-15 17:17:37.649135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.090 [2024-05-15 17:17:37.649231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.090 [2024-05-15 17:17:37.649246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.090 [2024-05-15 17:17:37.649252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.090 [2024-05-15 17:17:37.649259] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.090 [2024-05-15 17:17:37.649274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.090 qpair failed and we were unable to recover it. 00:26:50.090 [2024-05-15 17:17:37.659111] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.090 [2024-05-15 17:17:37.659173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.090 [2024-05-15 17:17:37.659188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.090 [2024-05-15 17:17:37.659194] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.090 [2024-05-15 17:17:37.659203] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.090 [2024-05-15 17:17:37.659217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.090 qpair failed and we were unable to recover it. 00:26:50.090 [2024-05-15 17:17:37.669173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.090 [2024-05-15 17:17:37.669236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.090 [2024-05-15 17:17:37.669251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.090 [2024-05-15 17:17:37.669257] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.090 [2024-05-15 17:17:37.669263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.090 [2024-05-15 17:17:37.669278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.090 qpair failed and we were unable to recover it. 
00:26:50.090 [2024-05-15 17:17:37.679174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.090 [2024-05-15 17:17:37.679239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.090 [2024-05-15 17:17:37.679253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.090 [2024-05-15 17:17:37.679260] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.090 [2024-05-15 17:17:37.679266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.090 [2024-05-15 17:17:37.679280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.090 qpair failed and we were unable to recover it. 00:26:50.090 [2024-05-15 17:17:37.689127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.090 [2024-05-15 17:17:37.689192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.090 [2024-05-15 17:17:37.689207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.090 [2024-05-15 17:17:37.689214] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.090 [2024-05-15 17:17:37.689220] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.090 [2024-05-15 17:17:37.689234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.090 qpair failed and we were unable to recover it. 00:26:50.090 [2024-05-15 17:17:37.699221] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.090 [2024-05-15 17:17:37.699284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.090 [2024-05-15 17:17:37.699298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.090 [2024-05-15 17:17:37.699305] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.090 [2024-05-15 17:17:37.699311] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.090 [2024-05-15 17:17:37.699325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.090 qpair failed and we were unable to recover it. 
00:26:50.090 [2024-05-15 17:17:37.709261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.090 [2024-05-15 17:17:37.709321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.090 [2024-05-15 17:17:37.709336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.090 [2024-05-15 17:17:37.709343] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.090 [2024-05-15 17:17:37.709349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.090 [2024-05-15 17:17:37.709363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.090 qpair failed and we were unable to recover it. 00:26:50.090 [2024-05-15 17:17:37.719294] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.090 [2024-05-15 17:17:37.719358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.090 [2024-05-15 17:17:37.719373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.090 [2024-05-15 17:17:37.719379] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.090 [2024-05-15 17:17:37.719386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.090 [2024-05-15 17:17:37.719400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.090 qpair failed and we were unable to recover it. 00:26:50.090 [2024-05-15 17:17:37.729253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.091 [2024-05-15 17:17:37.729317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.091 [2024-05-15 17:17:37.729332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.091 [2024-05-15 17:17:37.729339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.091 [2024-05-15 17:17:37.729345] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.091 [2024-05-15 17:17:37.729359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.091 qpair failed and we were unable to recover it. 
00:26:50.091 [2024-05-15 17:17:37.739366] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.091 [2024-05-15 17:17:37.739433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.091 [2024-05-15 17:17:37.739447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.091 [2024-05-15 17:17:37.739454] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.091 [2024-05-15 17:17:37.739460] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.091 [2024-05-15 17:17:37.739474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.091 qpair failed and we were unable to recover it. 00:26:50.350 [2024-05-15 17:17:37.749386] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.350 [2024-05-15 17:17:37.749447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.350 [2024-05-15 17:17:37.749462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.350 [2024-05-15 17:17:37.749472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.350 [2024-05-15 17:17:37.749478] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.350 [2024-05-15 17:17:37.749492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.350 qpair failed and we were unable to recover it. 00:26:50.350 [2024-05-15 17:17:37.759428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.350 [2024-05-15 17:17:37.759492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.350 [2024-05-15 17:17:37.759507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.350 [2024-05-15 17:17:37.759513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.350 [2024-05-15 17:17:37.759520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.350 [2024-05-15 17:17:37.759534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.350 qpair failed and we were unable to recover it. 
00:26:50.350 [2024-05-15 17:17:37.769434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.350 [2024-05-15 17:17:37.769496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.350 [2024-05-15 17:17:37.769510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.350 [2024-05-15 17:17:37.769517] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.350 [2024-05-15 17:17:37.769523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.350 [2024-05-15 17:17:37.769537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.350 qpair failed and we were unable to recover it. 00:26:50.350 [2024-05-15 17:17:37.779468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.350 [2024-05-15 17:17:37.779529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.350 [2024-05-15 17:17:37.779543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.350 [2024-05-15 17:17:37.779550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.350 [2024-05-15 17:17:37.779556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.350 [2024-05-15 17:17:37.779570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.350 qpair failed and we were unable to recover it. 00:26:50.350 [2024-05-15 17:17:37.789512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.350 [2024-05-15 17:17:37.789574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.350 [2024-05-15 17:17:37.789589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.350 [2024-05-15 17:17:37.789596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.350 [2024-05-15 17:17:37.789602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.350 [2024-05-15 17:17:37.789616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.350 qpair failed and we were unable to recover it. 
00:26:50.350 [2024-05-15 17:17:37.799503] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.350 [2024-05-15 17:17:37.799564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.350 [2024-05-15 17:17:37.799578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.350 [2024-05-15 17:17:37.799585] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.350 [2024-05-15 17:17:37.799592] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.350 [2024-05-15 17:17:37.799606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.350 qpair failed and we were unable to recover it. 00:26:50.350 [2024-05-15 17:17:37.809560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.350 [2024-05-15 17:17:37.809673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.350 [2024-05-15 17:17:37.809687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.350 [2024-05-15 17:17:37.809695] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.350 [2024-05-15 17:17:37.809701] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.350 [2024-05-15 17:17:37.809715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.350 qpair failed and we were unable to recover it. 00:26:50.350 [2024-05-15 17:17:37.819567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.350 [2024-05-15 17:17:37.819624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.350 [2024-05-15 17:17:37.819639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.350 [2024-05-15 17:17:37.819646] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.350 [2024-05-15 17:17:37.819651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.350 [2024-05-15 17:17:37.819666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.350 qpair failed and we were unable to recover it. 
00:26:50.350 [2024-05-15 17:17:37.829625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.351 [2024-05-15 17:17:37.829684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.351 [2024-05-15 17:17:37.829699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.351 [2024-05-15 17:17:37.829706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.351 [2024-05-15 17:17:37.829712] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.351 [2024-05-15 17:17:37.829728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.351 qpair failed and we were unable to recover it. 00:26:50.351 [2024-05-15 17:17:37.839638] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.351 [2024-05-15 17:17:37.839702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.351 [2024-05-15 17:17:37.839720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.351 [2024-05-15 17:17:37.839727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.351 [2024-05-15 17:17:37.839733] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.351 [2024-05-15 17:17:37.839747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.351 qpair failed and we were unable to recover it. 00:26:50.351 [2024-05-15 17:17:37.849702] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.351 [2024-05-15 17:17:37.849812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.351 [2024-05-15 17:17:37.849828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.351 [2024-05-15 17:17:37.849835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.351 [2024-05-15 17:17:37.849841] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.351 [2024-05-15 17:17:37.849856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.351 qpair failed and we were unable to recover it. 
00:26:50.351 [2024-05-15 17:17:37.859681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.351 [2024-05-15 17:17:37.859745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.351 [2024-05-15 17:17:37.859759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.351 [2024-05-15 17:17:37.859766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.351 [2024-05-15 17:17:37.859772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.351 [2024-05-15 17:17:37.859786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.351 qpair failed and we were unable to recover it. 00:26:50.351 [2024-05-15 17:17:37.869692] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.351 [2024-05-15 17:17:37.869751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.351 [2024-05-15 17:17:37.869765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.351 [2024-05-15 17:17:37.869772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.351 [2024-05-15 17:17:37.869778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.351 [2024-05-15 17:17:37.869792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.351 qpair failed and we were unable to recover it. 00:26:50.351 [2024-05-15 17:17:37.879743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.351 [2024-05-15 17:17:37.879802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.351 [2024-05-15 17:17:37.879816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.351 [2024-05-15 17:17:37.879823] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.351 [2024-05-15 17:17:37.879829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.351 [2024-05-15 17:17:37.879846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.351 qpair failed and we were unable to recover it. 
00:26:50.351 [2024-05-15 17:17:37.889788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.351 [2024-05-15 17:17:37.889850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.351 [2024-05-15 17:17:37.889864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.351 [2024-05-15 17:17:37.889871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.351 [2024-05-15 17:17:37.889877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.351 [2024-05-15 17:17:37.889891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.351 qpair failed and we were unable to recover it. 00:26:50.351 [2024-05-15 17:17:37.899786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.351 [2024-05-15 17:17:37.899847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.351 [2024-05-15 17:17:37.899861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.351 [2024-05-15 17:17:37.899868] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.351 [2024-05-15 17:17:37.899874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.351 [2024-05-15 17:17:37.899888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.351 qpair failed and we were unable to recover it. 00:26:50.351 [2024-05-15 17:17:37.909870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.351 [2024-05-15 17:17:37.909930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.351 [2024-05-15 17:17:37.909945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.351 [2024-05-15 17:17:37.909951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.351 [2024-05-15 17:17:37.909957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.351 [2024-05-15 17:17:37.909971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.351 qpair failed and we were unable to recover it. 
00:26:50.351 [2024-05-15 17:17:37.919857] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.351 [2024-05-15 17:17:37.919918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.351 [2024-05-15 17:17:37.919932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.351 [2024-05-15 17:17:37.919939] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.351 [2024-05-15 17:17:37.919945] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.351 [2024-05-15 17:17:37.919959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.351 qpair failed and we were unable to recover it. 00:26:50.351 [2024-05-15 17:17:37.929887] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.351 [2024-05-15 17:17:37.929950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.351 [2024-05-15 17:17:37.929969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.351 [2024-05-15 17:17:37.929976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.351 [2024-05-15 17:17:37.929982] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.351 [2024-05-15 17:17:37.929996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.351 qpair failed and we were unable to recover it. 00:26:50.351 [2024-05-15 17:17:37.939896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.351 [2024-05-15 17:17:37.939958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.351 [2024-05-15 17:17:37.939972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.351 [2024-05-15 17:17:37.939979] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.351 [2024-05-15 17:17:37.939985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.351 [2024-05-15 17:17:37.939999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.351 qpair failed and we were unable to recover it. 
00:26:50.351 [2024-05-15 17:17:37.949969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.351 [2024-05-15 17:17:37.950041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.351 [2024-05-15 17:17:37.950055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.351 [2024-05-15 17:17:37.950062] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.351 [2024-05-15 17:17:37.950068] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.351 [2024-05-15 17:17:37.950082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.351 qpair failed and we were unable to recover it. 00:26:50.351 [2024-05-15 17:17:37.959955] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.351 [2024-05-15 17:17:37.960016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.352 [2024-05-15 17:17:37.960031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.352 [2024-05-15 17:17:37.960038] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.352 [2024-05-15 17:17:37.960043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.352 [2024-05-15 17:17:37.960057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.352 qpair failed and we were unable to recover it. 00:26:50.352 [2024-05-15 17:17:37.970009] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.352 [2024-05-15 17:17:37.970112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.352 [2024-05-15 17:17:37.970126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.352 [2024-05-15 17:17:37.970133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.352 [2024-05-15 17:17:37.970139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.352 [2024-05-15 17:17:37.970157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.352 qpair failed and we were unable to recover it. 
00:26:50.352 [2024-05-15 17:17:37.980022] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.352 [2024-05-15 17:17:37.980083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.352 [2024-05-15 17:17:37.980098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.352 [2024-05-15 17:17:37.980105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.352 [2024-05-15 17:17:37.980111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.352 [2024-05-15 17:17:37.980125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.352 qpair failed and we were unable to recover it. 00:26:50.352 [2024-05-15 17:17:37.990056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.352 [2024-05-15 17:17:37.990112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.352 [2024-05-15 17:17:37.990127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.352 [2024-05-15 17:17:37.990134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.352 [2024-05-15 17:17:37.990140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.352 [2024-05-15 17:17:37.990154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.352 qpair failed and we were unable to recover it. 00:26:50.352 [2024-05-15 17:17:38.000086] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.352 [2024-05-15 17:17:38.000151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.352 [2024-05-15 17:17:38.000170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.352 [2024-05-15 17:17:38.000177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.352 [2024-05-15 17:17:38.000183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.352 [2024-05-15 17:17:38.000197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.352 qpair failed and we were unable to recover it. 
00:26:50.611 [2024-05-15 17:17:38.010130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.611 [2024-05-15 17:17:38.010199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.611 [2024-05-15 17:17:38.010214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.611 [2024-05-15 17:17:38.010221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.611 [2024-05-15 17:17:38.010227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.611 [2024-05-15 17:17:38.010240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.611 qpair failed and we were unable to recover it. 00:26:50.611 [2024-05-15 17:17:38.020158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.611 [2024-05-15 17:17:38.020260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.611 [2024-05-15 17:17:38.020275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.611 [2024-05-15 17:17:38.020282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.611 [2024-05-15 17:17:38.020287] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.611 [2024-05-15 17:17:38.020303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.611 qpair failed and we were unable to recover it. 00:26:50.611 [2024-05-15 17:17:38.030179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.611 [2024-05-15 17:17:38.030241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.611 [2024-05-15 17:17:38.030256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.611 [2024-05-15 17:17:38.030262] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.611 [2024-05-15 17:17:38.030268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.611 [2024-05-15 17:17:38.030282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.611 qpair failed and we were unable to recover it. 
00:26:50.611 [2024-05-15 17:17:38.040211] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.611 [2024-05-15 17:17:38.040273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.611 [2024-05-15 17:17:38.040288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.611 [2024-05-15 17:17:38.040294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.611 [2024-05-15 17:17:38.040300] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.611 [2024-05-15 17:17:38.040314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.611 qpair failed and we were unable to recover it. 00:26:50.611 [2024-05-15 17:17:38.050226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.611 [2024-05-15 17:17:38.050287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.611 [2024-05-15 17:17:38.050302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.611 [2024-05-15 17:17:38.050309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.611 [2024-05-15 17:17:38.050315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.611 [2024-05-15 17:17:38.050330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.611 qpair failed and we were unable to recover it. 00:26:50.611 [2024-05-15 17:17:38.060277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.611 [2024-05-15 17:17:38.060335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.611 [2024-05-15 17:17:38.060349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.611 [2024-05-15 17:17:38.060356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.611 [2024-05-15 17:17:38.060365] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.611 [2024-05-15 17:17:38.060379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.611 qpair failed and we were unable to recover it. 
00:26:50.611 [2024-05-15 17:17:38.070277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.611 [2024-05-15 17:17:38.070336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.611 [2024-05-15 17:17:38.070351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.611 [2024-05-15 17:17:38.070358] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.611 [2024-05-15 17:17:38.070364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.611 [2024-05-15 17:17:38.070378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.611 qpair failed and we were unable to recover it. 00:26:50.611 [2024-05-15 17:17:38.080313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.612 [2024-05-15 17:17:38.080378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.612 [2024-05-15 17:17:38.080392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.612 [2024-05-15 17:17:38.080399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.612 [2024-05-15 17:17:38.080405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.612 [2024-05-15 17:17:38.080419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.612 qpair failed and we were unable to recover it. 00:26:50.612 [2024-05-15 17:17:38.090395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.612 [2024-05-15 17:17:38.090507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.612 [2024-05-15 17:17:38.090521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.612 [2024-05-15 17:17:38.090528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.612 [2024-05-15 17:17:38.090534] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.612 [2024-05-15 17:17:38.090548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.612 qpair failed and we were unable to recover it. 
00:26:50.612 [2024-05-15 17:17:38.100367] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.612 [2024-05-15 17:17:38.100422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.612 [2024-05-15 17:17:38.100436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.612 [2024-05-15 17:17:38.100443] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.612 [2024-05-15 17:17:38.100449] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.612 [2024-05-15 17:17:38.100463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.612 qpair failed and we were unable to recover it. 00:26:50.612 [2024-05-15 17:17:38.110418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.612 [2024-05-15 17:17:38.110481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.612 [2024-05-15 17:17:38.110496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.612 [2024-05-15 17:17:38.110502] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.612 [2024-05-15 17:17:38.110508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.612 [2024-05-15 17:17:38.110523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.612 qpair failed and we were unable to recover it. 00:26:50.612 [2024-05-15 17:17:38.120436] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.612 [2024-05-15 17:17:38.120494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.612 [2024-05-15 17:17:38.120509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.612 [2024-05-15 17:17:38.120516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.612 [2024-05-15 17:17:38.120522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.612 [2024-05-15 17:17:38.120536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.612 qpair failed and we were unable to recover it. 
00:26:50.612 [2024-05-15 17:17:38.130495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.612 [2024-05-15 17:17:38.130559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.612 [2024-05-15 17:17:38.130573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.612 [2024-05-15 17:17:38.130580] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.612 [2024-05-15 17:17:38.130586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.612 [2024-05-15 17:17:38.130600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.612 qpair failed and we were unable to recover it. 00:26:50.612 [2024-05-15 17:17:38.140468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.612 [2024-05-15 17:17:38.140531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.612 [2024-05-15 17:17:38.140545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.612 [2024-05-15 17:17:38.140551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.612 [2024-05-15 17:17:38.140558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.612 [2024-05-15 17:17:38.140572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.612 qpair failed and we were unable to recover it. 00:26:50.612 [2024-05-15 17:17:38.150545] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.612 [2024-05-15 17:17:38.150612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.612 [2024-05-15 17:17:38.150626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.612 [2024-05-15 17:17:38.150636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.612 [2024-05-15 17:17:38.150642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.612 [2024-05-15 17:17:38.150656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.612 qpair failed and we were unable to recover it. 
00:26:50.612 [2024-05-15 17:17:38.160582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.612 [2024-05-15 17:17:38.160652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.612 [2024-05-15 17:17:38.160667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.612 [2024-05-15 17:17:38.160674] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.612 [2024-05-15 17:17:38.160681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.612 [2024-05-15 17:17:38.160695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.612 qpair failed and we were unable to recover it. 00:26:50.612 [2024-05-15 17:17:38.170587] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.612 [2024-05-15 17:17:38.170647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.612 [2024-05-15 17:17:38.170662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.612 [2024-05-15 17:17:38.170669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.612 [2024-05-15 17:17:38.170675] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.612 [2024-05-15 17:17:38.170688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.612 qpair failed and we were unable to recover it. 00:26:50.612 [2024-05-15 17:17:38.180653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.612 [2024-05-15 17:17:38.180717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.612 [2024-05-15 17:17:38.180731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.612 [2024-05-15 17:17:38.180738] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.612 [2024-05-15 17:17:38.180744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.612 [2024-05-15 17:17:38.180758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.612 qpair failed and we were unable to recover it. 
00:26:50.612 [2024-05-15 17:17:38.190626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.612 [2024-05-15 17:17:38.190687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.612 [2024-05-15 17:17:38.190702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.612 [2024-05-15 17:17:38.190709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.612 [2024-05-15 17:17:38.190715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.612 [2024-05-15 17:17:38.190729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.612 qpair failed and we were unable to recover it. 00:26:50.612 [2024-05-15 17:17:38.200690] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.612 [2024-05-15 17:17:38.200753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.612 [2024-05-15 17:17:38.200769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.612 [2024-05-15 17:17:38.200776] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.612 [2024-05-15 17:17:38.200782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.612 [2024-05-15 17:17:38.200797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.612 qpair failed and we were unable to recover it. 00:26:50.612 [2024-05-15 17:17:38.210683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.613 [2024-05-15 17:17:38.210747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.613 [2024-05-15 17:17:38.210761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.613 [2024-05-15 17:17:38.210770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.613 [2024-05-15 17:17:38.210777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.613 [2024-05-15 17:17:38.210790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.613 qpair failed and we were unable to recover it. 
00:26:50.613 [2024-05-15 17:17:38.220699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.613 [2024-05-15 17:17:38.220758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.613 [2024-05-15 17:17:38.220774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.613 [2024-05-15 17:17:38.220781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.613 [2024-05-15 17:17:38.220786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.613 [2024-05-15 17:17:38.220801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.613 qpair failed and we were unable to recover it. 00:26:50.613 [2024-05-15 17:17:38.230734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.613 [2024-05-15 17:17:38.230807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.613 [2024-05-15 17:17:38.230821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.613 [2024-05-15 17:17:38.230828] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.613 [2024-05-15 17:17:38.230835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.613 [2024-05-15 17:17:38.230849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.613 qpair failed and we were unable to recover it. 00:26:50.613 [2024-05-15 17:17:38.240780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.613 [2024-05-15 17:17:38.240840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.613 [2024-05-15 17:17:38.240858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.613 [2024-05-15 17:17:38.240866] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.613 [2024-05-15 17:17:38.240872] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.613 [2024-05-15 17:17:38.240886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.613 qpair failed and we were unable to recover it. 
00:26:50.613 [2024-05-15 17:17:38.250842] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.613 [2024-05-15 17:17:38.250905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.613 [2024-05-15 17:17:38.250920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.613 [2024-05-15 17:17:38.250927] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.613 [2024-05-15 17:17:38.250933] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.613 [2024-05-15 17:17:38.250948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.613 qpair failed and we were unable to recover it. 00:26:50.613 [2024-05-15 17:17:38.260802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.613 [2024-05-15 17:17:38.260861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.613 [2024-05-15 17:17:38.260877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.613 [2024-05-15 17:17:38.260885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.613 [2024-05-15 17:17:38.260891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.613 [2024-05-15 17:17:38.260906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.613 qpair failed and we were unable to recover it. 00:26:50.872 [2024-05-15 17:17:38.270850] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.872 [2024-05-15 17:17:38.270911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.872 [2024-05-15 17:17:38.270926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.872 [2024-05-15 17:17:38.270934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.872 [2024-05-15 17:17:38.270940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.872 [2024-05-15 17:17:38.270955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.872 qpair failed and we were unable to recover it. 
00:26:50.872 [2024-05-15 17:17:38.280862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.872 [2024-05-15 17:17:38.280926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.872 [2024-05-15 17:17:38.280940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.872 [2024-05-15 17:17:38.280947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.872 [2024-05-15 17:17:38.280953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.872 [2024-05-15 17:17:38.280967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.872 qpair failed and we were unable to recover it. 00:26:50.872 [2024-05-15 17:17:38.290863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.872 [2024-05-15 17:17:38.290925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.872 [2024-05-15 17:17:38.290940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.872 [2024-05-15 17:17:38.290947] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.872 [2024-05-15 17:17:38.290953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.872 [2024-05-15 17:17:38.290967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.872 qpair failed and we were unable to recover it. 00:26:50.872 [2024-05-15 17:17:38.300889] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.872 [2024-05-15 17:17:38.300946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.872 [2024-05-15 17:17:38.300961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.872 [2024-05-15 17:17:38.300970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.872 [2024-05-15 17:17:38.300976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.872 [2024-05-15 17:17:38.300991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.872 qpair failed and we were unable to recover it. 
00:26:50.872 [2024-05-15 17:17:38.310936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.872 [2024-05-15 17:17:38.311030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.872 [2024-05-15 17:17:38.311044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.872 [2024-05-15 17:17:38.311051] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.872 [2024-05-15 17:17:38.311057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.872 [2024-05-15 17:17:38.311071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.872 qpair failed and we were unable to recover it. 00:26:50.872 [2024-05-15 17:17:38.320941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.872 [2024-05-15 17:17:38.321005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.872 [2024-05-15 17:17:38.321019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.872 [2024-05-15 17:17:38.321026] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.872 [2024-05-15 17:17:38.321032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.872 [2024-05-15 17:17:38.321046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.872 qpair failed and we were unable to recover it. 00:26:50.872 [2024-05-15 17:17:38.331023] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.872 [2024-05-15 17:17:38.331088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.872 [2024-05-15 17:17:38.331106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.872 [2024-05-15 17:17:38.331112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.872 [2024-05-15 17:17:38.331118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.872 [2024-05-15 17:17:38.331132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.872 qpair failed and we were unable to recover it. 
00:26:50.872 [2024-05-15 17:17:38.341029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.872 [2024-05-15 17:17:38.341121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.872 [2024-05-15 17:17:38.341135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.872 [2024-05-15 17:17:38.341142] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.872 [2024-05-15 17:17:38.341148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.872 [2024-05-15 17:17:38.341162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.872 qpair failed and we were unable to recover it. 00:26:50.872 [2024-05-15 17:17:38.351018] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.872 [2024-05-15 17:17:38.351078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.872 [2024-05-15 17:17:38.351093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.872 [2024-05-15 17:17:38.351100] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.873 [2024-05-15 17:17:38.351106] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.873 [2024-05-15 17:17:38.351120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.873 qpair failed and we were unable to recover it. 00:26:50.873 [2024-05-15 17:17:38.361129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.873 [2024-05-15 17:17:38.361197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.873 [2024-05-15 17:17:38.361212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.873 [2024-05-15 17:17:38.361219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.873 [2024-05-15 17:17:38.361225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.873 [2024-05-15 17:17:38.361239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.873 qpair failed and we were unable to recover it. 
00:26:50.873 [2024-05-15 17:17:38.371082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.873 [2024-05-15 17:17:38.371147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.873 [2024-05-15 17:17:38.371161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.873 [2024-05-15 17:17:38.371173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.873 [2024-05-15 17:17:38.371180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.873 [2024-05-15 17:17:38.371197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.873 qpair failed and we were unable to recover it. 00:26:50.873 [2024-05-15 17:17:38.381160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.873 [2024-05-15 17:17:38.381230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.873 [2024-05-15 17:17:38.381244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.873 [2024-05-15 17:17:38.381251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.873 [2024-05-15 17:17:38.381257] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.873 [2024-05-15 17:17:38.381271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.873 qpair failed and we were unable to recover it. 00:26:50.873 [2024-05-15 17:17:38.391133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.873 [2024-05-15 17:17:38.391195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.873 [2024-05-15 17:17:38.391210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.873 [2024-05-15 17:17:38.391216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.873 [2024-05-15 17:17:38.391222] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.873 [2024-05-15 17:17:38.391237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.873 qpair failed and we were unable to recover it. 
00:26:50.873 [2024-05-15 17:17:38.401177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.873 [2024-05-15 17:17:38.401240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.873 [2024-05-15 17:17:38.401254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.873 [2024-05-15 17:17:38.401261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.873 [2024-05-15 17:17:38.401267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.873 [2024-05-15 17:17:38.401281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.873 qpair failed and we were unable to recover it. 00:26:50.873 [2024-05-15 17:17:38.411227] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.873 [2024-05-15 17:17:38.411291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.873 [2024-05-15 17:17:38.411306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.873 [2024-05-15 17:17:38.411313] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.873 [2024-05-15 17:17:38.411319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.873 [2024-05-15 17:17:38.411334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.873 qpair failed and we were unable to recover it. 00:26:50.873 [2024-05-15 17:17:38.421277] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.873 [2024-05-15 17:17:38.421335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.873 [2024-05-15 17:17:38.421352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.873 [2024-05-15 17:17:38.421359] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.873 [2024-05-15 17:17:38.421365] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.873 [2024-05-15 17:17:38.421381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.873 qpair failed and we were unable to recover it. 
00:26:50.873 [2024-05-15 17:17:38.431307] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.873 [2024-05-15 17:17:38.431374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.873 [2024-05-15 17:17:38.431389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.873 [2024-05-15 17:17:38.431395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.873 [2024-05-15 17:17:38.431401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.873 [2024-05-15 17:17:38.431416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.873 qpair failed and we were unable to recover it. 00:26:50.873 [2024-05-15 17:17:38.441276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.873 [2024-05-15 17:17:38.441340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.873 [2024-05-15 17:17:38.441355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.873 [2024-05-15 17:17:38.441361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.873 [2024-05-15 17:17:38.441367] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.873 [2024-05-15 17:17:38.441381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.873 qpair failed and we were unable to recover it. 00:26:50.873 [2024-05-15 17:17:38.451330] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.873 [2024-05-15 17:17:38.451394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.873 [2024-05-15 17:17:38.451409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.873 [2024-05-15 17:17:38.451416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.873 [2024-05-15 17:17:38.451422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.873 [2024-05-15 17:17:38.451436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.873 qpair failed and we were unable to recover it. 
00:26:50.873 [2024-05-15 17:17:38.461390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.873 [2024-05-15 17:17:38.461449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.873 [2024-05-15 17:17:38.461463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.873 [2024-05-15 17:17:38.461470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.873 [2024-05-15 17:17:38.461479] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.873 [2024-05-15 17:17:38.461493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.873 qpair failed and we were unable to recover it. 00:26:50.873 [2024-05-15 17:17:38.471354] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.873 [2024-05-15 17:17:38.471420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.873 [2024-05-15 17:17:38.471434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.873 [2024-05-15 17:17:38.471441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.873 [2024-05-15 17:17:38.471447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.873 [2024-05-15 17:17:38.471461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.873 qpair failed and we were unable to recover it. 00:26:50.873 [2024-05-15 17:17:38.481454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.873 [2024-05-15 17:17:38.481521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.873 [2024-05-15 17:17:38.481535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.873 [2024-05-15 17:17:38.481542] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.873 [2024-05-15 17:17:38.481548] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.873 [2024-05-15 17:17:38.481562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.874 qpair failed and we were unable to recover it. 
00:26:50.874 [2024-05-15 17:17:38.491405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.874 [2024-05-15 17:17:38.491466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.874 [2024-05-15 17:17:38.491481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.874 [2024-05-15 17:17:38.491488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.874 [2024-05-15 17:17:38.491494] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.874 [2024-05-15 17:17:38.491508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.874 qpair failed and we were unable to recover it. 00:26:50.874 [2024-05-15 17:17:38.501490] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.874 [2024-05-15 17:17:38.501588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.874 [2024-05-15 17:17:38.501602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.874 [2024-05-15 17:17:38.501609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.874 [2024-05-15 17:17:38.501614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.874 [2024-05-15 17:17:38.501629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.874 qpair failed and we were unable to recover it. 00:26:50.874 [2024-05-15 17:17:38.511515] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.874 [2024-05-15 17:17:38.511575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.874 [2024-05-15 17:17:38.511590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.874 [2024-05-15 17:17:38.511596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.874 [2024-05-15 17:17:38.511602] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.874 [2024-05-15 17:17:38.511617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.874 qpair failed and we were unable to recover it. 
00:26:50.874 [2024-05-15 17:17:38.521566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:50.874 [2024-05-15 17:17:38.521627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:50.874 [2024-05-15 17:17:38.521642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:50.874 [2024-05-15 17:17:38.521650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:50.874 [2024-05-15 17:17:38.521655] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:50.874 [2024-05-15 17:17:38.521670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:50.874 qpair failed and we were unable to recover it. 00:26:51.134 [2024-05-15 17:17:38.531550] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.134 [2024-05-15 17:17:38.531611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.134 [2024-05-15 17:17:38.531626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.134 [2024-05-15 17:17:38.531633] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.134 [2024-05-15 17:17:38.531639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.134 [2024-05-15 17:17:38.531653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.134 qpair failed and we were unable to recover it. 00:26:51.134 [2024-05-15 17:17:38.541595] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.134 [2024-05-15 17:17:38.541656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.134 [2024-05-15 17:17:38.541671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.134 [2024-05-15 17:17:38.541678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.134 [2024-05-15 17:17:38.541684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.134 [2024-05-15 17:17:38.541698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.134 qpair failed and we were unable to recover it. 
00:26:51.134 [2024-05-15 17:17:38.551636] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.134 [2024-05-15 17:17:38.551704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.134 [2024-05-15 17:17:38.551718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.134 [2024-05-15 17:17:38.551731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.134 [2024-05-15 17:17:38.551737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.134 [2024-05-15 17:17:38.551751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.134 qpair failed and we were unable to recover it. 00:26:51.134 [2024-05-15 17:17:38.561667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.134 [2024-05-15 17:17:38.561732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.134 [2024-05-15 17:17:38.561746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.134 [2024-05-15 17:17:38.561753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.134 [2024-05-15 17:17:38.561759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.134 [2024-05-15 17:17:38.561773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.134 qpair failed and we were unable to recover it. 00:26:51.134 [2024-05-15 17:17:38.571708] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.134 [2024-05-15 17:17:38.571771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.134 [2024-05-15 17:17:38.571786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.134 [2024-05-15 17:17:38.571792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.134 [2024-05-15 17:17:38.571799] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.134 [2024-05-15 17:17:38.571813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.134 qpair failed and we were unable to recover it. 
00:26:51.134 [2024-05-15 17:17:38.581734] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.134 [2024-05-15 17:17:38.581800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.134 [2024-05-15 17:17:38.581814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.134 [2024-05-15 17:17:38.581821] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.134 [2024-05-15 17:17:38.581827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.134 [2024-05-15 17:17:38.581841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.134 qpair failed and we were unable to recover it. 00:26:51.134 [2024-05-15 17:17:38.591720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.134 [2024-05-15 17:17:38.591785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.134 [2024-05-15 17:17:38.591799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.134 [2024-05-15 17:17:38.591806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.134 [2024-05-15 17:17:38.591812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.134 [2024-05-15 17:17:38.591826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.134 qpair failed and we were unable to recover it. 00:26:51.134 [2024-05-15 17:17:38.601802] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.134 [2024-05-15 17:17:38.601863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.134 [2024-05-15 17:17:38.601877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.134 [2024-05-15 17:17:38.601884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.134 [2024-05-15 17:17:38.601890] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.134 [2024-05-15 17:17:38.601904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.134 qpair failed and we were unable to recover it. 
00:26:51.134 [2024-05-15 17:17:38.611801] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.134 [2024-05-15 17:17:38.611864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.134 [2024-05-15 17:17:38.611878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.134 [2024-05-15 17:17:38.611885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.134 [2024-05-15 17:17:38.611891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.134 [2024-05-15 17:17:38.611905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.134 qpair failed and we were unable to recover it. 00:26:51.134 [2024-05-15 17:17:38.621902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.134 [2024-05-15 17:17:38.621978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.134 [2024-05-15 17:17:38.621992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.134 [2024-05-15 17:17:38.621999] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.134 [2024-05-15 17:17:38.622005] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.134 [2024-05-15 17:17:38.622019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.134 qpair failed and we were unable to recover it. 00:26:51.134 [2024-05-15 17:17:38.631911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.134 [2024-05-15 17:17:38.631969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.134 [2024-05-15 17:17:38.631983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.134 [2024-05-15 17:17:38.631990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.134 [2024-05-15 17:17:38.631996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.135 [2024-05-15 17:17:38.632011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.135 qpair failed and we were unable to recover it. 
00:26:51.135 [2024-05-15 17:17:38.641933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.135 [2024-05-15 17:17:38.641994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.135 [2024-05-15 17:17:38.642008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.135 [2024-05-15 17:17:38.642018] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.135 [2024-05-15 17:17:38.642024] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.135 [2024-05-15 17:17:38.642039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.135 qpair failed and we were unable to recover it. 00:26:51.135 [2024-05-15 17:17:38.651952] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.135 [2024-05-15 17:17:38.652010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.135 [2024-05-15 17:17:38.652025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.135 [2024-05-15 17:17:38.652033] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.135 [2024-05-15 17:17:38.652039] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.135 [2024-05-15 17:17:38.652053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.135 qpair failed and we were unable to recover it. 00:26:51.135 [2024-05-15 17:17:38.661964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.135 [2024-05-15 17:17:38.662023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.135 [2024-05-15 17:17:38.662038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.135 [2024-05-15 17:17:38.662044] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.135 [2024-05-15 17:17:38.662050] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.135 [2024-05-15 17:17:38.662065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.135 qpair failed and we were unable to recover it. 
00:26:51.135 [2024-05-15 17:17:38.671996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.135 [2024-05-15 17:17:38.672053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.135 [2024-05-15 17:17:38.672067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.135 [2024-05-15 17:17:38.672074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.135 [2024-05-15 17:17:38.672081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.135 [2024-05-15 17:17:38.672095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.135 qpair failed and we were unable to recover it. 00:26:51.135 [2024-05-15 17:17:38.682039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.135 [2024-05-15 17:17:38.682112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.135 [2024-05-15 17:17:38.682126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.135 [2024-05-15 17:17:38.682133] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.135 [2024-05-15 17:17:38.682139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.135 [2024-05-15 17:17:38.682153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.135 qpair failed and we were unable to recover it. 00:26:51.135 [2024-05-15 17:17:38.692075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.135 [2024-05-15 17:17:38.692138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.135 [2024-05-15 17:17:38.692153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.135 [2024-05-15 17:17:38.692160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.135 [2024-05-15 17:17:38.692170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.135 [2024-05-15 17:17:38.692184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.135 qpair failed and we were unable to recover it. 
00:26:51.135 [2024-05-15 17:17:38.702095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.135 [2024-05-15 17:17:38.702156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.135 [2024-05-15 17:17:38.702175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.135 [2024-05-15 17:17:38.702182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.135 [2024-05-15 17:17:38.702187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.135 [2024-05-15 17:17:38.702201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.135 qpair failed and we were unable to recover it. 00:26:51.135 [2024-05-15 17:17:38.712135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.135 [2024-05-15 17:17:38.712213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.135 [2024-05-15 17:17:38.712230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.135 [2024-05-15 17:17:38.712237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.135 [2024-05-15 17:17:38.712243] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.135 [2024-05-15 17:17:38.712257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.135 qpair failed and we were unable to recover it. 00:26:51.135 [2024-05-15 17:17:38.722144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.135 [2024-05-15 17:17:38.722214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.135 [2024-05-15 17:17:38.722229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.135 [2024-05-15 17:17:38.722236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.135 [2024-05-15 17:17:38.722242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.135 [2024-05-15 17:17:38.722256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.135 qpair failed and we were unable to recover it. 
00:26:51.135 [2024-05-15 17:17:38.732194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.135 [2024-05-15 17:17:38.732260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.135 [2024-05-15 17:17:38.732277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.135 [2024-05-15 17:17:38.732285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.135 [2024-05-15 17:17:38.732291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.135 [2024-05-15 17:17:38.732305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.135 qpair failed and we were unable to recover it. 00:26:51.135 [2024-05-15 17:17:38.742195] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.135 [2024-05-15 17:17:38.742257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.135 [2024-05-15 17:17:38.742271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.135 [2024-05-15 17:17:38.742278] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.135 [2024-05-15 17:17:38.742284] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.135 [2024-05-15 17:17:38.742298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.135 qpair failed and we were unable to recover it. 00:26:51.135 [2024-05-15 17:17:38.752238] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.135 [2024-05-15 17:17:38.752300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.135 [2024-05-15 17:17:38.752314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.135 [2024-05-15 17:17:38.752321] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.135 [2024-05-15 17:17:38.752327] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.135 [2024-05-15 17:17:38.752341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.135 qpair failed and we were unable to recover it. 
00:26:51.135 [2024-05-15 17:17:38.762287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.135 [2024-05-15 17:17:38.762350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.135 [2024-05-15 17:17:38.762364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.135 [2024-05-15 17:17:38.762371] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.135 [2024-05-15 17:17:38.762377] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.135 [2024-05-15 17:17:38.762391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.135 qpair failed and we were unable to recover it. 00:26:51.136 [2024-05-15 17:17:38.772305] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.136 [2024-05-15 17:17:38.772368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.136 [2024-05-15 17:17:38.772383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.136 [2024-05-15 17:17:38.772390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.136 [2024-05-15 17:17:38.772396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.136 [2024-05-15 17:17:38.772413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.136 qpair failed and we were unable to recover it. 00:26:51.136 [2024-05-15 17:17:38.782306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.136 [2024-05-15 17:17:38.782371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.136 [2024-05-15 17:17:38.782385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.136 [2024-05-15 17:17:38.782392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.136 [2024-05-15 17:17:38.782397] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.136 [2024-05-15 17:17:38.782411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.136 qpair failed and we were unable to recover it. 
00:26:51.395 [2024-05-15 17:17:38.792359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.395 [2024-05-15 17:17:38.792421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.395 [2024-05-15 17:17:38.792435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.395 [2024-05-15 17:17:38.792442] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.395 [2024-05-15 17:17:38.792448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.395 [2024-05-15 17:17:38.792462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.395 qpair failed and we were unable to recover it. 00:26:51.395 [2024-05-15 17:17:38.802399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.395 [2024-05-15 17:17:38.802459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.395 [2024-05-15 17:17:38.802473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.395 [2024-05-15 17:17:38.802480] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.395 [2024-05-15 17:17:38.802486] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.395 [2024-05-15 17:17:38.802501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.395 qpair failed and we were unable to recover it. 00:26:51.395 [2024-05-15 17:17:38.812411] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.395 [2024-05-15 17:17:38.812472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.395 [2024-05-15 17:17:38.812486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.395 [2024-05-15 17:17:38.812493] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.395 [2024-05-15 17:17:38.812498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.395 [2024-05-15 17:17:38.812513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.395 qpair failed and we were unable to recover it. 
00:26:51.395 [2024-05-15 17:17:38.822465] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.395 [2024-05-15 17:17:38.822519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.395 [2024-05-15 17:17:38.822537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.395 [2024-05-15 17:17:38.822544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.395 [2024-05-15 17:17:38.822550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.395 [2024-05-15 17:17:38.822564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.395 qpair failed and we were unable to recover it. 00:26:51.395 [2024-05-15 17:17:38.832501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.395 [2024-05-15 17:17:38.832580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.395 [2024-05-15 17:17:38.832596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.395 [2024-05-15 17:17:38.832603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.395 [2024-05-15 17:17:38.832609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.395 [2024-05-15 17:17:38.832623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.395 qpair failed and we were unable to recover it. 00:26:51.395 [2024-05-15 17:17:38.842541] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.395 [2024-05-15 17:17:38.842687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.395 [2024-05-15 17:17:38.842702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.395 [2024-05-15 17:17:38.842709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.395 [2024-05-15 17:17:38.842715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.395 [2024-05-15 17:17:38.842730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.395 qpair failed and we were unable to recover it. 
00:26:51.395 [2024-05-15 17:17:38.852559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.395 [2024-05-15 17:17:38.852623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.395 [2024-05-15 17:17:38.852639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.395 [2024-05-15 17:17:38.852646] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.395 [2024-05-15 17:17:38.852652] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.395 [2024-05-15 17:17:38.852667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.395 qpair failed and we were unable to recover it. 00:26:51.395 [2024-05-15 17:17:38.862559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.395 [2024-05-15 17:17:38.862637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.395 [2024-05-15 17:17:38.862653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.395 [2024-05-15 17:17:38.862660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.395 [2024-05-15 17:17:38.862669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.395 [2024-05-15 17:17:38.862684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.395 qpair failed and we were unable to recover it. 00:26:51.395 [2024-05-15 17:17:38.872602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.395 [2024-05-15 17:17:38.872667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.395 [2024-05-15 17:17:38.872682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.395 [2024-05-15 17:17:38.872689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.395 [2024-05-15 17:17:38.872695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.395 [2024-05-15 17:17:38.872709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.395 qpair failed and we were unable to recover it. 
00:26:51.395 [2024-05-15 17:17:38.882615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.395 [2024-05-15 17:17:38.882730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.395 [2024-05-15 17:17:38.882745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.395 [2024-05-15 17:17:38.882752] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.395 [2024-05-15 17:17:38.882758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.395 [2024-05-15 17:17:38.882772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.395 qpair failed and we were unable to recover it. 00:26:51.395 [2024-05-15 17:17:38.892642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.395 [2024-05-15 17:17:38.892710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.395 [2024-05-15 17:17:38.892724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.395 [2024-05-15 17:17:38.892731] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.395 [2024-05-15 17:17:38.892737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.395 [2024-05-15 17:17:38.892751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.395 qpair failed and we were unable to recover it. 00:26:51.395 [2024-05-15 17:17:38.902667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.395 [2024-05-15 17:17:38.902743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.395 [2024-05-15 17:17:38.902757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.395 [2024-05-15 17:17:38.902764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.395 [2024-05-15 17:17:38.902770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.396 [2024-05-15 17:17:38.902783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.396 qpair failed and we were unable to recover it. 
00:26:51.396 [2024-05-15 17:17:38.912703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.396 [2024-05-15 17:17:38.912766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.396 [2024-05-15 17:17:38.912781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.396 [2024-05-15 17:17:38.912788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.396 [2024-05-15 17:17:38.912794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.396 [2024-05-15 17:17:38.912809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.396 qpair failed and we were unable to recover it. 00:26:51.396 [2024-05-15 17:17:38.922699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.396 [2024-05-15 17:17:38.922786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.396 [2024-05-15 17:17:38.922800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.396 [2024-05-15 17:17:38.922806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.396 [2024-05-15 17:17:38.922812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.396 [2024-05-15 17:17:38.922826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.396 qpair failed and we were unable to recover it. 00:26:51.396 [2024-05-15 17:17:38.932805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.396 [2024-05-15 17:17:38.932867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.396 [2024-05-15 17:17:38.932882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.396 [2024-05-15 17:17:38.932889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.396 [2024-05-15 17:17:38.932895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.396 [2024-05-15 17:17:38.932908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.396 qpair failed and we were unable to recover it. 
00:26:51.396 [2024-05-15 17:17:38.942780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.396 [2024-05-15 17:17:38.942840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.396 [2024-05-15 17:17:38.942855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.396 [2024-05-15 17:17:38.942862] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.396 [2024-05-15 17:17:38.942868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.396 [2024-05-15 17:17:38.942882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.396 qpair failed and we were unable to recover it. 00:26:51.396 [2024-05-15 17:17:38.952794] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.396 [2024-05-15 17:17:38.952857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.396 [2024-05-15 17:17:38.952872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.396 [2024-05-15 17:17:38.952882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.396 [2024-05-15 17:17:38.952888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.396 [2024-05-15 17:17:38.952903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.396 qpair failed and we were unable to recover it. 00:26:51.396 [2024-05-15 17:17:38.962804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.396 [2024-05-15 17:17:38.962900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.396 [2024-05-15 17:17:38.962914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.396 [2024-05-15 17:17:38.962921] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.396 [2024-05-15 17:17:38.962926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.396 [2024-05-15 17:17:38.962941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.396 qpair failed and we were unable to recover it. 
00:26:51.396 [2024-05-15 17:17:38.972861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.396 [2024-05-15 17:17:38.972926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.396 [2024-05-15 17:17:38.972941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.396 [2024-05-15 17:17:38.972948] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.396 [2024-05-15 17:17:38.972954] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.396 [2024-05-15 17:17:38.972968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.396 qpair failed and we were unable to recover it. 00:26:51.396 [2024-05-15 17:17:38.982905] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.396 [2024-05-15 17:17:38.982964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.396 [2024-05-15 17:17:38.982979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.396 [2024-05-15 17:17:38.982986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.396 [2024-05-15 17:17:38.982991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.396 [2024-05-15 17:17:38.983006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.396 qpair failed and we were unable to recover it. 00:26:51.396 [2024-05-15 17:17:38.992932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.396 [2024-05-15 17:17:38.992995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.396 [2024-05-15 17:17:38.993010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.396 [2024-05-15 17:17:38.993016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.396 [2024-05-15 17:17:38.993022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.396 [2024-05-15 17:17:38.993036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.396 qpair failed and we were unable to recover it. 
00:26:51.396 [2024-05-15 17:17:39.002958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.396 [2024-05-15 17:17:39.003018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.396 [2024-05-15 17:17:39.003033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.396 [2024-05-15 17:17:39.003040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.396 [2024-05-15 17:17:39.003046] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.396 [2024-05-15 17:17:39.003060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.396 qpair failed and we were unable to recover it. 00:26:51.396 [2024-05-15 17:17:39.012978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.396 [2024-05-15 17:17:39.013038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.396 [2024-05-15 17:17:39.013053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.396 [2024-05-15 17:17:39.013060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.396 [2024-05-15 17:17:39.013067] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.396 [2024-05-15 17:17:39.013081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.396 qpair failed and we were unable to recover it. 00:26:51.396 [2024-05-15 17:17:39.023015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.396 [2024-05-15 17:17:39.023074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.397 [2024-05-15 17:17:39.023089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.397 [2024-05-15 17:17:39.023095] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.397 [2024-05-15 17:17:39.023101] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.397 [2024-05-15 17:17:39.023116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.397 qpair failed and we were unable to recover it. 
00:26:51.397 [2024-05-15 17:17:39.033039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.397 [2024-05-15 17:17:39.033103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.397 [2024-05-15 17:17:39.033117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.397 [2024-05-15 17:17:39.033124] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.397 [2024-05-15 17:17:39.033131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.397 [2024-05-15 17:17:39.033145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.397 qpair failed and we were unable to recover it. 00:26:51.397 [2024-05-15 17:17:39.043085] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.397 [2024-05-15 17:17:39.043148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.397 [2024-05-15 17:17:39.043162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.397 [2024-05-15 17:17:39.043178] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.397 [2024-05-15 17:17:39.043184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.397 [2024-05-15 17:17:39.043198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.397 qpair failed and we were unable to recover it. 00:26:51.656 [2024-05-15 17:17:39.053135] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.656 [2024-05-15 17:17:39.053210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.656 [2024-05-15 17:17:39.053226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.656 [2024-05-15 17:17:39.053233] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.656 [2024-05-15 17:17:39.053239] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.656 [2024-05-15 17:17:39.053253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.656 qpair failed and we were unable to recover it. 
00:26:51.656 [2024-05-15 17:17:39.063132] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.656 [2024-05-15 17:17:39.063194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.656 [2024-05-15 17:17:39.063209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.656 [2024-05-15 17:17:39.063215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.656 [2024-05-15 17:17:39.063221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.656 [2024-05-15 17:17:39.063235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.656 qpair failed and we were unable to recover it. 00:26:51.656 [2024-05-15 17:17:39.073166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.656 [2024-05-15 17:17:39.073225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.656 [2024-05-15 17:17:39.073239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.656 [2024-05-15 17:17:39.073246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.656 [2024-05-15 17:17:39.073252] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.656 [2024-05-15 17:17:39.073266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.656 qpair failed and we were unable to recover it. 00:26:51.656 [2024-05-15 17:17:39.083209] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.656 [2024-05-15 17:17:39.083272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.656 [2024-05-15 17:17:39.083286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.656 [2024-05-15 17:17:39.083292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.656 [2024-05-15 17:17:39.083298] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.656 [2024-05-15 17:17:39.083312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.656 qpair failed and we were unable to recover it. 
00:26:51.656 [2024-05-15 17:17:39.093242] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.656 [2024-05-15 17:17:39.093310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.656 [2024-05-15 17:17:39.093325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.656 [2024-05-15 17:17:39.093332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.656 [2024-05-15 17:17:39.093338] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.656 [2024-05-15 17:17:39.093352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.656 qpair failed and we were unable to recover it. 00:26:51.656 [2024-05-15 17:17:39.103246] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.656 [2024-05-15 17:17:39.103309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.656 [2024-05-15 17:17:39.103323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.656 [2024-05-15 17:17:39.103329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.656 [2024-05-15 17:17:39.103335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.656 [2024-05-15 17:17:39.103350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.656 qpair failed and we were unable to recover it. 00:26:51.656 [2024-05-15 17:17:39.113287] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.656 [2024-05-15 17:17:39.113349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.656 [2024-05-15 17:17:39.113363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.656 [2024-05-15 17:17:39.113370] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.656 [2024-05-15 17:17:39.113376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.656 [2024-05-15 17:17:39.113390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.656 qpair failed and we were unable to recover it. 
00:26:51.656 [2024-05-15 17:17:39.123340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.656 [2024-05-15 17:17:39.123417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.656 [2024-05-15 17:17:39.123432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.656 [2024-05-15 17:17:39.123438] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.656 [2024-05-15 17:17:39.123444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.656 [2024-05-15 17:17:39.123458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.656 qpair failed and we were unable to recover it. 00:26:51.656 [2024-05-15 17:17:39.133364] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.656 [2024-05-15 17:17:39.133437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.656 [2024-05-15 17:17:39.133454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.656 [2024-05-15 17:17:39.133461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.656 [2024-05-15 17:17:39.133467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.656 [2024-05-15 17:17:39.133481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.656 qpair failed and we were unable to recover it. 00:26:51.656 [2024-05-15 17:17:39.143385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.656 [2024-05-15 17:17:39.143448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.657 [2024-05-15 17:17:39.143462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.657 [2024-05-15 17:17:39.143469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.657 [2024-05-15 17:17:39.143475] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.657 [2024-05-15 17:17:39.143489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.657 qpair failed and we were unable to recover it. 
00:26:51.657 [2024-05-15 17:17:39.153404] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.657 [2024-05-15 17:17:39.153463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.657 [2024-05-15 17:17:39.153478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.657 [2024-05-15 17:17:39.153485] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.657 [2024-05-15 17:17:39.153491] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.657 [2024-05-15 17:17:39.153505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.657 qpair failed and we were unable to recover it. 00:26:51.657 [2024-05-15 17:17:39.163443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.657 [2024-05-15 17:17:39.163509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.657 [2024-05-15 17:17:39.163524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.657 [2024-05-15 17:17:39.163531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.657 [2024-05-15 17:17:39.163537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.657 [2024-05-15 17:17:39.163551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.657 qpair failed and we were unable to recover it. 00:26:51.657 [2024-05-15 17:17:39.173463] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.657 [2024-05-15 17:17:39.173529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.657 [2024-05-15 17:17:39.173543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.657 [2024-05-15 17:17:39.173550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.657 [2024-05-15 17:17:39.173556] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.657 [2024-05-15 17:17:39.173573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.657 qpair failed and we were unable to recover it. 
00:26:51.657 [2024-05-15 17:17:39.183494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.657 [2024-05-15 17:17:39.183555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.657 [2024-05-15 17:17:39.183569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.657 [2024-05-15 17:17:39.183576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.657 [2024-05-15 17:17:39.183581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.657 [2024-05-15 17:17:39.183595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.657 qpair failed and we were unable to recover it. 00:26:51.657 [2024-05-15 17:17:39.193544] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.657 [2024-05-15 17:17:39.193604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.657 [2024-05-15 17:17:39.193618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.657 [2024-05-15 17:17:39.193625] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.657 [2024-05-15 17:17:39.193631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.657 [2024-05-15 17:17:39.193644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.657 qpair failed and we were unable to recover it. 00:26:51.657 [2024-05-15 17:17:39.203562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.657 [2024-05-15 17:17:39.203623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.657 [2024-05-15 17:17:39.203637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.657 [2024-05-15 17:17:39.203644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.657 [2024-05-15 17:17:39.203649] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.657 [2024-05-15 17:17:39.203664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.657 qpair failed and we were unable to recover it. 
00:26:51.657 [2024-05-15 17:17:39.213585] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.657 [2024-05-15 17:17:39.213650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.657 [2024-05-15 17:17:39.213665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.657 [2024-05-15 17:17:39.213671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.657 [2024-05-15 17:17:39.213677] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.657 [2024-05-15 17:17:39.213691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.657 qpair failed and we were unable to recover it. 00:26:51.657 [2024-05-15 17:17:39.223593] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.657 [2024-05-15 17:17:39.223650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.657 [2024-05-15 17:17:39.223667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.657 [2024-05-15 17:17:39.223673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.657 [2024-05-15 17:17:39.223679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.657 [2024-05-15 17:17:39.223693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.657 qpair failed and we were unable to recover it. 00:26:51.657 [2024-05-15 17:17:39.233586] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.657 [2024-05-15 17:17:39.233646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.657 [2024-05-15 17:17:39.233661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.657 [2024-05-15 17:17:39.233667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.657 [2024-05-15 17:17:39.233673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.657 [2024-05-15 17:17:39.233687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.657 qpair failed and we were unable to recover it. 
00:26:51.657 [2024-05-15 17:17:39.243676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.657 [2024-05-15 17:17:39.243738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.657 [2024-05-15 17:17:39.243752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.657 [2024-05-15 17:17:39.243759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.657 [2024-05-15 17:17:39.243764] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.657 [2024-05-15 17:17:39.243779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.657 qpair failed and we were unable to recover it. 00:26:51.657 [2024-05-15 17:17:39.253691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.657 [2024-05-15 17:17:39.253760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.657 [2024-05-15 17:17:39.253775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.657 [2024-05-15 17:17:39.253782] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.657 [2024-05-15 17:17:39.253787] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.657 [2024-05-15 17:17:39.253802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.657 qpair failed and we were unable to recover it. 00:26:51.657 [2024-05-15 17:17:39.263724] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.657 [2024-05-15 17:17:39.263781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.657 [2024-05-15 17:17:39.263796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.657 [2024-05-15 17:17:39.263803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.657 [2024-05-15 17:17:39.263814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.657 [2024-05-15 17:17:39.263828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.657 qpair failed and we were unable to recover it. 
00:26:51.657 [2024-05-15 17:17:39.273755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.658 [2024-05-15 17:17:39.273821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.658 [2024-05-15 17:17:39.273835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.658 [2024-05-15 17:17:39.273842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.658 [2024-05-15 17:17:39.273848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.658 [2024-05-15 17:17:39.273862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.658 qpair failed and we were unable to recover it. 00:26:51.658 [2024-05-15 17:17:39.283800] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.658 [2024-05-15 17:17:39.283862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.658 [2024-05-15 17:17:39.283876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.658 [2024-05-15 17:17:39.283883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.658 [2024-05-15 17:17:39.283888] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.658 [2024-05-15 17:17:39.283902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.658 qpair failed and we were unable to recover it. 00:26:51.658 [2024-05-15 17:17:39.293837] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.658 [2024-05-15 17:17:39.293895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.658 [2024-05-15 17:17:39.293910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.658 [2024-05-15 17:17:39.293918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.658 [2024-05-15 17:17:39.293924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.658 [2024-05-15 17:17:39.293937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.658 qpair failed and we were unable to recover it. 
00:26:51.658 [2024-05-15 17:17:39.303849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.658 [2024-05-15 17:17:39.303913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.658 [2024-05-15 17:17:39.303928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.658 [2024-05-15 17:17:39.303934] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.658 [2024-05-15 17:17:39.303940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.658 [2024-05-15 17:17:39.303955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.658 qpair failed and we were unable to recover it. 00:26:51.916 [2024-05-15 17:17:39.313885] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.916 [2024-05-15 17:17:39.313948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.916 [2024-05-15 17:17:39.313962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.916 [2024-05-15 17:17:39.313969] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.916 [2024-05-15 17:17:39.313976] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.916 [2024-05-15 17:17:39.313991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.916 qpair failed and we were unable to recover it. 00:26:51.916 [2024-05-15 17:17:39.323914] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.916 [2024-05-15 17:17:39.323978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.917 [2024-05-15 17:17:39.323992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.917 [2024-05-15 17:17:39.324000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.917 [2024-05-15 17:17:39.324006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.917 [2024-05-15 17:17:39.324020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.917 qpair failed and we were unable to recover it. 
00:26:51.917 [2024-05-15 17:17:39.333872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.917 [2024-05-15 17:17:39.333937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.917 [2024-05-15 17:17:39.333952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.917 [2024-05-15 17:17:39.333959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.917 [2024-05-15 17:17:39.333965] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.917 [2024-05-15 17:17:39.333980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-05-15 17:17:39.344019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.917 [2024-05-15 17:17:39.344078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.917 [2024-05-15 17:17:39.344092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.917 [2024-05-15 17:17:39.344099] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.917 [2024-05-15 17:17:39.344105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.917 [2024-05-15 17:17:39.344119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-05-15 17:17:39.353995] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.917 [2024-05-15 17:17:39.354054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.917 [2024-05-15 17:17:39.354068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.917 [2024-05-15 17:17:39.354075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.917 [2024-05-15 17:17:39.354084] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.917 [2024-05-15 17:17:39.354098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.917 qpair failed and we were unable to recover it. 
00:26:51.917 [2024-05-15 17:17:39.364035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.917 [2024-05-15 17:17:39.364099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.917 [2024-05-15 17:17:39.364114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.917 [2024-05-15 17:17:39.364120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.917 [2024-05-15 17:17:39.364126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.917 [2024-05-15 17:17:39.364141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-05-15 17:17:39.374071] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.917 [2024-05-15 17:17:39.374132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.917 [2024-05-15 17:17:39.374146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.917 [2024-05-15 17:17:39.374153] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.917 [2024-05-15 17:17:39.374159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.917 [2024-05-15 17:17:39.374177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-05-15 17:17:39.384078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.917 [2024-05-15 17:17:39.384141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.917 [2024-05-15 17:17:39.384156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.917 [2024-05-15 17:17:39.384163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.917 [2024-05-15 17:17:39.384172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.917 [2024-05-15 17:17:39.384186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.917 qpair failed and we were unable to recover it. 
00:26:51.917 [2024-05-15 17:17:39.394136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.917 [2024-05-15 17:17:39.394201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.917 [2024-05-15 17:17:39.394215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.917 [2024-05-15 17:17:39.394222] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.917 [2024-05-15 17:17:39.394228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.917 [2024-05-15 17:17:39.394242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-05-15 17:17:39.404152] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.917 [2024-05-15 17:17:39.404218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.917 [2024-05-15 17:17:39.404233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.917 [2024-05-15 17:17:39.404240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.917 [2024-05-15 17:17:39.404246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.917 [2024-05-15 17:17:39.404260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-05-15 17:17:39.414175] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.917 [2024-05-15 17:17:39.414234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.917 [2024-05-15 17:17:39.414248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.917 [2024-05-15 17:17:39.414255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.917 [2024-05-15 17:17:39.414261] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.917 [2024-05-15 17:17:39.414275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.917 qpair failed and we were unable to recover it. 
00:26:51.917 [2024-05-15 17:17:39.424188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.917 [2024-05-15 17:17:39.424252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.917 [2024-05-15 17:17:39.424266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.917 [2024-05-15 17:17:39.424273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.917 [2024-05-15 17:17:39.424279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.917 [2024-05-15 17:17:39.424293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-05-15 17:17:39.434241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.917 [2024-05-15 17:17:39.434307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.917 [2024-05-15 17:17:39.434321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.917 [2024-05-15 17:17:39.434328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.917 [2024-05-15 17:17:39.434334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.917 [2024-05-15 17:17:39.434348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.917 qpair failed and we were unable to recover it. 00:26:51.917 [2024-05-15 17:17:39.444257] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.917 [2024-05-15 17:17:39.444320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.917 [2024-05-15 17:17:39.444334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.917 [2024-05-15 17:17:39.444344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.917 [2024-05-15 17:17:39.444350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.917 [2024-05-15 17:17:39.444363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.917 qpair failed and we were unable to recover it. 
00:26:51.917 [2024-05-15 17:17:39.454294] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.917 [2024-05-15 17:17:39.454367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.917 [2024-05-15 17:17:39.454382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.917 [2024-05-15 17:17:39.454389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.918 [2024-05-15 17:17:39.454394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.918 [2024-05-15 17:17:39.454408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-05-15 17:17:39.464312] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.918 [2024-05-15 17:17:39.464376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.918 [2024-05-15 17:17:39.464390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.918 [2024-05-15 17:17:39.464397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.918 [2024-05-15 17:17:39.464403] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.918 [2024-05-15 17:17:39.464417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-05-15 17:17:39.474359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.918 [2024-05-15 17:17:39.474427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.918 [2024-05-15 17:17:39.474441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.918 [2024-05-15 17:17:39.474448] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.918 [2024-05-15 17:17:39.474454] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.918 [2024-05-15 17:17:39.474468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.918 qpair failed and we were unable to recover it. 
00:26:51.918 [2024-05-15 17:17:39.484358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.918 [2024-05-15 17:17:39.484420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.918 [2024-05-15 17:17:39.484434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.918 [2024-05-15 17:17:39.484441] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.918 [2024-05-15 17:17:39.484447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.918 [2024-05-15 17:17:39.484461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-05-15 17:17:39.494387] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.918 [2024-05-15 17:17:39.494453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.918 [2024-05-15 17:17:39.494467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.918 [2024-05-15 17:17:39.494475] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.918 [2024-05-15 17:17:39.494481] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.918 [2024-05-15 17:17:39.494495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-05-15 17:17:39.504435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.918 [2024-05-15 17:17:39.504532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.918 [2024-05-15 17:17:39.504546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.918 [2024-05-15 17:17:39.504553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.918 [2024-05-15 17:17:39.504559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.918 [2024-05-15 17:17:39.504573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.918 qpair failed and we were unable to recover it. 
00:26:51.918 [2024-05-15 17:17:39.514454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.918 [2024-05-15 17:17:39.514510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.918 [2024-05-15 17:17:39.514524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.918 [2024-05-15 17:17:39.514532] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.918 [2024-05-15 17:17:39.514537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.918 [2024-05-15 17:17:39.514552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-05-15 17:17:39.524479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.918 [2024-05-15 17:17:39.524539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.918 [2024-05-15 17:17:39.524553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.918 [2024-05-15 17:17:39.524560] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.918 [2024-05-15 17:17:39.524566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.918 [2024-05-15 17:17:39.524580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-05-15 17:17:39.534511] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.918 [2024-05-15 17:17:39.534577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.918 [2024-05-15 17:17:39.534595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.918 [2024-05-15 17:17:39.534602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.918 [2024-05-15 17:17:39.534607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.918 [2024-05-15 17:17:39.534621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.918 qpair failed and we were unable to recover it. 
00:26:51.918 [2024-05-15 17:17:39.544549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.918 [2024-05-15 17:17:39.544618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.918 [2024-05-15 17:17:39.544633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.918 [2024-05-15 17:17:39.544639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.918 [2024-05-15 17:17:39.544645] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.918 [2024-05-15 17:17:39.544659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-05-15 17:17:39.554603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.918 [2024-05-15 17:17:39.554668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.918 [2024-05-15 17:17:39.554683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.918 [2024-05-15 17:17:39.554689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.918 [2024-05-15 17:17:39.554695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.918 [2024-05-15 17:17:39.554709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.918 qpair failed and we were unable to recover it. 00:26:51.918 [2024-05-15 17:17:39.564573] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:51.918 [2024-05-15 17:17:39.564637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:51.918 [2024-05-15 17:17:39.564651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:51.918 [2024-05-15 17:17:39.564658] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:51.918 [2024-05-15 17:17:39.564664] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:51.918 [2024-05-15 17:17:39.564678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:51.918 qpair failed and we were unable to recover it. 
00:26:52.177 [2024-05-15 17:17:39.574631] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.574693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.574708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.574715] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.574721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.574738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 00:26:52.178 [2024-05-15 17:17:39.584659] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.584718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.584733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.584739] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.584745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.584759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 00:26:52.178 [2024-05-15 17:17:39.594672] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.594737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.594752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.594759] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.594766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.594780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 
00:26:52.178 [2024-05-15 17:17:39.604746] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.604814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.604829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.604835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.604841] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.604855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 00:26:52.178 [2024-05-15 17:17:39.614731] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.614792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.614807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.614814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.614820] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.614834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 00:26:52.178 [2024-05-15 17:17:39.624793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.624851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.624869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.624876] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.624882] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.624896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 
00:26:52.178 [2024-05-15 17:17:39.634776] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.634840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.634855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.634861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.634867] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.634881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 00:26:52.178 [2024-05-15 17:17:39.644790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.644852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.644867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.644873] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.644879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.644893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 00:26:52.178 [2024-05-15 17:17:39.654836] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.654904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.654919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.654925] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.654932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.654946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 
00:26:52.178 [2024-05-15 17:17:39.664869] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.664928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.664943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.664950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.664956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.664973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 00:26:52.178 [2024-05-15 17:17:39.674861] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.674918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.674934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.674941] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.674947] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.674962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 00:26:52.178 [2024-05-15 17:17:39.684939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.685004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.685019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.685025] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.685032] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.685046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 
00:26:52.178 [2024-05-15 17:17:39.694910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.694974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.694989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.694997] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.695003] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.695017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 00:26:52.178 [2024-05-15 17:17:39.704934] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.704999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.705013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.705021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.705027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.705040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 00:26:52.178 [2024-05-15 17:17:39.715015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.715081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.178 [2024-05-15 17:17:39.715096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.178 [2024-05-15 17:17:39.715102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.178 [2024-05-15 17:17:39.715109] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.178 [2024-05-15 17:17:39.715123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.178 qpair failed and we were unable to recover it. 
00:26:52.178 [2024-05-15 17:17:39.725007] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.178 [2024-05-15 17:17:39.725074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.179 [2024-05-15 17:17:39.725089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.179 [2024-05-15 17:17:39.725096] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.179 [2024-05-15 17:17:39.725102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.179 [2024-05-15 17:17:39.725117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.179 qpair failed and we were unable to recover it. 00:26:52.179 [2024-05-15 17:17:39.735063] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.179 [2024-05-15 17:17:39.735130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.179 [2024-05-15 17:17:39.735145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.179 [2024-05-15 17:17:39.735152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.179 [2024-05-15 17:17:39.735158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.179 [2024-05-15 17:17:39.735178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.179 qpair failed and we were unable to recover it. 00:26:52.179 [2024-05-15 17:17:39.745047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.179 [2024-05-15 17:17:39.745113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.179 [2024-05-15 17:17:39.745128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.179 [2024-05-15 17:17:39.745135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.179 [2024-05-15 17:17:39.745141] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.179 [2024-05-15 17:17:39.745155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.179 qpair failed and we were unable to recover it. 
00:26:52.179 [2024-05-15 17:17:39.755069] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.179 [2024-05-15 17:17:39.755135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.179 [2024-05-15 17:17:39.755150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.179 [2024-05-15 17:17:39.755157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.179 [2024-05-15 17:17:39.755171] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.179 [2024-05-15 17:17:39.755186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.179 qpair failed and we were unable to recover it. 00:26:52.179 [2024-05-15 17:17:39.765173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.179 [2024-05-15 17:17:39.765239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.179 [2024-05-15 17:17:39.765254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.179 [2024-05-15 17:17:39.765261] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.179 [2024-05-15 17:17:39.765267] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.179 [2024-05-15 17:17:39.765282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.179 qpair failed and we were unable to recover it. 00:26:52.179 [2024-05-15 17:17:39.775147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.179 [2024-05-15 17:17:39.775219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.179 [2024-05-15 17:17:39.775234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.179 [2024-05-15 17:17:39.775240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.179 [2024-05-15 17:17:39.775246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.179 [2024-05-15 17:17:39.775261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.179 qpair failed and we were unable to recover it. 
00:26:52.179 [2024-05-15 17:17:39.785237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.179 [2024-05-15 17:17:39.785305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.179 [2024-05-15 17:17:39.785320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.179 [2024-05-15 17:17:39.785327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.179 [2024-05-15 17:17:39.785333] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.179 [2024-05-15 17:17:39.785347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.179 qpair failed and we were unable to recover it. 00:26:52.179 [2024-05-15 17:17:39.795215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.179 [2024-05-15 17:17:39.795279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.179 [2024-05-15 17:17:39.795294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.179 [2024-05-15 17:17:39.795301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.179 [2024-05-15 17:17:39.795307] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.179 [2024-05-15 17:17:39.795321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.179 qpair failed and we were unable to recover it. 00:26:52.179 [2024-05-15 17:17:39.805234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.179 [2024-05-15 17:17:39.805295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.179 [2024-05-15 17:17:39.805310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.179 [2024-05-15 17:17:39.805316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.179 [2024-05-15 17:17:39.805323] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.179 [2024-05-15 17:17:39.805337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.179 qpair failed and we were unable to recover it. 
00:26:52.179 [2024-05-15 17:17:39.815251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.179 [2024-05-15 17:17:39.815308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.179 [2024-05-15 17:17:39.815322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.179 [2024-05-15 17:17:39.815329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.179 [2024-05-15 17:17:39.815335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.179 [2024-05-15 17:17:39.815350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.179 qpair failed and we were unable to recover it. 00:26:52.179 [2024-05-15 17:17:39.825371] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.179 [2024-05-15 17:17:39.825429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.179 [2024-05-15 17:17:39.825443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.179 [2024-05-15 17:17:39.825450] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.179 [2024-05-15 17:17:39.825456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.179 [2024-05-15 17:17:39.825470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.179 qpair failed and we were unable to recover it. 00:26:52.438 [2024-05-15 17:17:39.835395] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.438 [2024-05-15 17:17:39.835458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.438 [2024-05-15 17:17:39.835474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.438 [2024-05-15 17:17:39.835481] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.438 [2024-05-15 17:17:39.835487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.438 [2024-05-15 17:17:39.835501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.438 qpair failed and we were unable to recover it. 
00:26:52.438 [2024-05-15 17:17:39.845416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.438 [2024-05-15 17:17:39.845478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.438 [2024-05-15 17:17:39.845493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.438 [2024-05-15 17:17:39.845503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.438 [2024-05-15 17:17:39.845509] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.438 [2024-05-15 17:17:39.845523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.438 qpair failed and we were unable to recover it. 00:26:52.438 [2024-05-15 17:17:39.855430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.439 [2024-05-15 17:17:39.855493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.439 [2024-05-15 17:17:39.855508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.439 [2024-05-15 17:17:39.855516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.439 [2024-05-15 17:17:39.855522] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.439 [2024-05-15 17:17:39.855536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.439 qpair failed and we were unable to recover it. 00:26:52.439 [2024-05-15 17:17:39.865394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.439 [2024-05-15 17:17:39.865454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.439 [2024-05-15 17:17:39.865469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.439 [2024-05-15 17:17:39.865476] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.439 [2024-05-15 17:17:39.865482] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.439 [2024-05-15 17:17:39.865497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.439 qpair failed and we were unable to recover it. 
00:26:52.439 [2024-05-15 17:17:39.875437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.439 [2024-05-15 17:17:39.875502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.439 [2024-05-15 17:17:39.875518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.439 [2024-05-15 17:17:39.875525] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.439 [2024-05-15 17:17:39.875531] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.439 [2024-05-15 17:17:39.875546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.439 qpair failed and we were unable to recover it. 00:26:52.439 [2024-05-15 17:17:39.885468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.439 [2024-05-15 17:17:39.885532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.439 [2024-05-15 17:17:39.885548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.439 [2024-05-15 17:17:39.885555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.439 [2024-05-15 17:17:39.885561] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.439 [2024-05-15 17:17:39.885575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.439 qpair failed and we were unable to recover it. 00:26:52.439 [2024-05-15 17:17:39.895493] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.439 [2024-05-15 17:17:39.895554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.439 [2024-05-15 17:17:39.895569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.439 [2024-05-15 17:17:39.895577] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.439 [2024-05-15 17:17:39.895583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.439 [2024-05-15 17:17:39.895597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.439 qpair failed and we were unable to recover it. 
00:26:52.439 [2024-05-15 17:17:39.905516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.439 [2024-05-15 17:17:39.905578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.439 [2024-05-15 17:17:39.905593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.439 [2024-05-15 17:17:39.905600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.439 [2024-05-15 17:17:39.905606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.439 [2024-05-15 17:17:39.905621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.439 qpair failed and we were unable to recover it. 00:26:52.439 [2024-05-15 17:17:39.915540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.439 [2024-05-15 17:17:39.915601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.439 [2024-05-15 17:17:39.915616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.439 [2024-05-15 17:17:39.915623] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.439 [2024-05-15 17:17:39.915628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.439 [2024-05-15 17:17:39.915643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.439 qpair failed and we were unable to recover it. 00:26:52.439 [2024-05-15 17:17:39.925572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.439 [2024-05-15 17:17:39.925633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.439 [2024-05-15 17:17:39.925648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.439 [2024-05-15 17:17:39.925655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.439 [2024-05-15 17:17:39.925661] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.439 [2024-05-15 17:17:39.925676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.439 qpair failed and we were unable to recover it. 
00:26:52.439 [2024-05-15 17:17:39.935634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.439 [2024-05-15 17:17:39.935697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.439 [2024-05-15 17:17:39.935715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.439 [2024-05-15 17:17:39.935722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.439 [2024-05-15 17:17:39.935728] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.439 [2024-05-15 17:17:39.935742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.439 qpair failed and we were unable to recover it. 00:26:52.439 [2024-05-15 17:17:39.945620] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.439 [2024-05-15 17:17:39.945683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.439 [2024-05-15 17:17:39.945698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.439 [2024-05-15 17:17:39.945705] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.439 [2024-05-15 17:17:39.945711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.439 [2024-05-15 17:17:39.945726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.439 qpair failed and we were unable to recover it. 00:26:52.439 [2024-05-15 17:17:39.955641] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.439 [2024-05-15 17:17:39.955699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.439 [2024-05-15 17:17:39.955714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.439 [2024-05-15 17:17:39.955721] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.439 [2024-05-15 17:17:39.955727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f8000b90 00:26:52.439 [2024-05-15 17:17:39.955741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.439 qpair failed and we were unable to recover it. 
00:26:52.439 [2024-05-15 17:17:39.965755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.439 [2024-05-15 17:17:39.965835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.439 [2024-05-15 17:17:39.965863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.439 [2024-05-15 17:17:39.965874] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.439 [2024-05-15 17:17:39.965884] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9400000b90 00:26:52.439 [2024-05-15 17:17:39.965906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:52.439 qpair failed and we were unable to recover it. 00:26:52.439 [2024-05-15 17:17:39.975729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.440 [2024-05-15 17:17:39.975789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.440 [2024-05-15 17:17:39.975805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.440 [2024-05-15 17:17:39.975812] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.440 [2024-05-15 17:17:39.975819] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9400000b90 00:26:52.440 [2024-05-15 17:17:39.975837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:52.440 qpair failed and we were unable to recover it. 00:26:52.440 [2024-05-15 17:17:39.985805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.440 [2024-05-15 17:17:39.985884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.440 [2024-05-15 17:17:39.985911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.440 [2024-05-15 17:17:39.985923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.440 [2024-05-15 17:17:39.985932] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f0000b90 00:26:52.440 [2024-05-15 17:17:39.985955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.440 qpair failed and we were unable to recover it. 
00:26:52.440 [2024-05-15 17:17:39.995826] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.440 [2024-05-15 17:17:39.995893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.440 [2024-05-15 17:17:39.995909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.440 [2024-05-15 17:17:39.995916] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.440 [2024-05-15 17:17:39.995922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f93f0000b90 00:26:52.440 [2024-05-15 17:17:39.995937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.440 qpair failed and we were unable to recover it. 00:26:52.440 [2024-05-15 17:17:40.005839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.440 [2024-05-15 17:17:40.005921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.440 [2024-05-15 17:17:40.005949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.440 [2024-05-15 17:17:40.005961] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.440 [2024-05-15 17:17:40.005969] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x244fc10 00:26:52.440 [2024-05-15 17:17:40.005992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.440 qpair failed and we were unable to recover it. 00:26:52.440 [2024-05-15 17:17:40.015812] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:52.440 [2024-05-15 17:17:40.015877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:52.440 [2024-05-15 17:17:40.015894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:52.440 [2024-05-15 17:17:40.015902] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:52.440 [2024-05-15 17:17:40.015909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x244fc10 00:26:52.440 [2024-05-15 17:17:40.015924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.440 qpair failed and we were unable to recover it. 00:26:52.440 [2024-05-15 17:17:40.016021] nvme_ctrlr.c:4341:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:52.440 A controller has encountered a failure and is being reset. 00:26:52.440 Controller properly reset. 
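The repeated errors above are the behaviour this disconnect test provokes: each I/O-qpair CONNECT is rejected by the target with "Unknown controller ID 0x1" (sct 1, sc 130) until the host's Keep Alive finally fails and the controller is reset, as shown just below. A quick way to tally those attempts from a saved copy of this console output is a small shell pipeline; note that build.log here is only a placeholder name for wherever the output was captured, not a file produced by the test itself.

    # Count failed CONNECT attempts per transport qpair in a saved console log.
    # 'build.log' is a placeholder path, not an artifact of the test run.
    grep -o 'Failed to connect tqpair=0x[0-9a-f]*' build.log \
        | sort | uniq -c | sort -rn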
00:26:52.440 Initializing NVMe Controllers 00:26:52.440 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:52.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:52.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:52.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:52.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:52.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:52.440 Initialization complete. Launching workers. 00:26:52.440 Starting thread on core 1 00:26:52.440 Starting thread on core 2 00:26:52.440 Starting thread on core 3 00:26:52.440 Starting thread on core 0 00:26:52.440 17:17:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:52.440 00:26:52.440 real 0m11.247s 00:26:52.440 user 0m21.280s 00:26:52.440 sys 0m4.229s 00:26:52.440 17:17:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:52.440 17:17:40 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:52.440 ************************************ 00:26:52.440 END TEST nvmf_target_disconnect_tc2 00:26:52.440 ************************************ 00:26:52.440 17:17:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:52.440 17:17:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:52.440 17:17:40 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:52.440 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:52.440 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:26:52.440 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:52.440 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:26:52.440 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:52.440 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:52.440 rmmod nvme_tcp 00:26:52.698 rmmod nvme_fabrics 00:26:52.698 rmmod nvme_keyring 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3222724 ']' 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3222724 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3222724 ']' 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 3222724 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3222724 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@952 -- # process_name=reactor_4 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3222724' 00:26:52.698 killing process with pid 3222724 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 3222724 00:26:52.698 [2024-05-15 17:17:40.199174] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:52.698 17:17:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 3222724 00:26:52.957 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:52.957 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:52.957 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:52.957 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.957 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:52.957 17:17:40 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.957 17:17:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.957 17:17:40 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.855 17:17:42 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:54.855 00:26:54.855 real 0m19.205s 00:26:54.855 user 0m48.256s 00:26:54.855 sys 0m8.469s 00:26:54.855 17:17:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:54.855 17:17:42 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:54.855 ************************************ 00:26:54.855 END TEST nvmf_target_disconnect 00:26:54.855 ************************************ 00:26:55.113 17:17:42 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:26:55.113 17:17:42 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:55.113 17:17:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:55.113 17:17:42 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:55.113 00:26:55.113 real 20m48.420s 00:26:55.113 user 45m12.043s 00:26:55.113 sys 6m13.122s 00:26:55.113 17:17:42 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:55.113 17:17:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:55.113 ************************************ 00:26:55.113 END TEST nvmf_tcp 00:26:55.113 ************************************ 00:26:55.113 17:17:42 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:26:55.113 17:17:42 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:55.113 17:17:42 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:55.113 17:17:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:55.113 17:17:42 -- common/autotest_common.sh@10 -- # set +x 00:26:55.113 ************************************ 00:26:55.113 START TEST spdkcli_nvmf_tcp 00:26:55.113 ************************************ 00:26:55.113 17:17:42 spdkcli_nvmf_tcp 
-- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:55.113 * Looking for test storage... 00:26:55.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.113 17:17:42 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3224378 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3224378 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 3224378 ']' 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:55.114 17:17:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:55.371 [2024-05-15 17:17:42.800144] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:26:55.371 [2024-05-15 17:17:42.800199] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3224378 ] 00:26:55.371 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.371 [2024-05-15 17:17:42.854546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:55.371 [2024-05-15 17:17:42.935265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.371 [2024-05-15 17:17:42.935268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.303 17:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:56.303 17:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:26:56.303 17:17:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:56.303 17:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:56.303 17:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:56.303 17:17:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:56.303 17:17:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:56.303 17:17:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:56.303 17:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:56.303 17:17:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:56.303 17:17:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:56.303 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:56.303 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:56.303 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:56.303 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:56.303 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:56.303 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:56.303 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:56.303 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:56.303 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:56.303 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:56.303 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:56.303 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:56.303 ' 00:26:58.827 [2024-05-15 17:17:46.025086] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.762 [2024-05-15 17:17:47.200671] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:59.762 [2024-05-15 17:17:47.201032] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:02.287 [2024-05-15 17:17:49.492043] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:04.184 [2024-05-15 17:17:51.550516] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:05.555 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:05.555 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:05.555 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:05.555 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:05.555 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:05.555 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:05.555 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:05.555 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:05.555 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:05.555 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:05.555 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:05.555 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:05.555 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:05.555 17:17:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:05.555 17:17:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:05.812 17:17:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:05.812 17:17:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:05.812 17:17:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:05.812 17:17:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:05.812 17:17:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:05.812 17:17:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:27:06.069 17:17:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:06.069 17:17:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:06.069 17:17:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:06.069 17:17:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:06.069 17:17:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:06.069 17:17:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:06.069 17:17:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:06.069 17:17:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:06.069 17:17:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:06.069 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:06.070 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:06.070 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:06.070 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:06.070 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:06.070 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:06.070 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:06.070 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:06.070 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:06.070 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:06.070 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:06.070 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:06.070 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:06.070 ' 00:27:11.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:11.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:11.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:11.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:11.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:11.325 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:11.325 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:11.325 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:11.325 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:11.325 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:11.325 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:11.325 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:11.325 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:11.325 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3224378 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3224378 ']' 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3224378 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3224378 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3224378' 00:27:11.325 killing process with pid 3224378 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 3224378 00:27:11.325 [2024-05-15 17:17:58.706897] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 3224378 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3224378 ']' 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3224378 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3224378 ']' 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3224378 00:27:11.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3224378) - No such process 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 3224378 is not found' 00:27:11.325 Process with pid 3224378 is not found 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:11.325 00:27:11.325 real 0m16.280s 00:27:11.325 user 0m34.355s 00:27:11.325 sys 0m0.727s 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:11.325 17:17:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:27:11.325 ************************************ 00:27:11.325 END TEST spdkcli_nvmf_tcp 00:27:11.325 ************************************ 00:27:11.326 17:17:58 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:11.326 17:17:58 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:11.326 17:17:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:11.326 17:17:58 -- common/autotest_common.sh@10 -- # set +x 00:27:11.326 ************************************ 00:27:11.326 START TEST nvmf_identify_passthru 00:27:11.326 ************************************ 00:27:11.326 17:17:58 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:11.584 * Looking for test storage... 00:27:11.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:11.584 17:17:59 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.584 17:17:59 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.584 17:17:59 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.584 17:17:59 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.584 17:17:59 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.584 17:17:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.584 17:17:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.584 17:17:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:11.584 17:17:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:11.584 17:17:59 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.584 17:17:59 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.584 17:17:59 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.584 17:17:59 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.584 17:17:59 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.584 17:17:59 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.584 17:17:59 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.584 17:17:59 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:11.584 17:17:59 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.584 17:17:59 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.584 17:17:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:11.584 17:17:59 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:11.584 17:17:59 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:11.584 17:17:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:17.076 17:18:04 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:17.076 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.076 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:17.077 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:17.077 Found net devices under 0000:86:00.0: cvl_0_0 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:17.077 Found net devices under 0000:86:00.1: cvl_0_1 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
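The nvmf_tcp_init sequence that follows builds the test's point-to-point TCP topology: one port of the dual-port E810 found above (cvl_0_0) is moved into a private network namespace for the target, while the other (cvl_0_1) stays in the root namespace for the initiator. A minimal sketch of the same steps, assuming the interface names and 10.0.0.x addresses shown in the trace:

    # target port goes into its own namespace; initiator port stays in the root ns
    TARGET_IF=cvl_0_0
    INITIATOR_IF=cvl_0_1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                     # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"    # target side

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # let NVMe/TCP traffic reach the default port, then verify both directions
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

Because the two cvl_0_* ports are functions 0000:86:00.0 and 0000:86:00.1 of the same NIC, this gives a real on-wire TCP path without needing a second host.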
00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:17.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:27:17.077 00:27:17.077 --- 10.0.0.2 ping statistics --- 00:27:17.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.077 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:17.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:17.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:27:17.077 00:27:17.077 --- 10.0.0.1 ping statistics --- 00:27:17.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.077 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:17.077 17:18:04 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:17.077 17:18:04 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:17.077 17:18:04 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:17.077 17:18:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:17.077 17:18:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:17.077 17:18:04 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:27:17.077 17:18:04 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:27:17.077 17:18:04 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:27:17.077 17:18:04 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:27:17.077 17:18:04 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:17.077 17:18:04 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:27:17.077 17:18:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:17.077 17:18:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:17.077 17:18:04 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:27:17.077 17:18:04 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:27:17.077 17:18:04 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:5e:00.0 00:27:17.077 17:18:04 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:5e:00.0 00:27:17.077 17:18:04 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:27:17.077 17:18:04 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:27:17.077 17:18:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:17.077 17:18:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:17.077 17:18:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:17.077 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.262 
17:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:27:21.262 17:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:21.262 17:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:21.262 17:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:21.262 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.445 17:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:25.445 17:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:25.445 17:18:12 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.445 17:18:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:25.445 17:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:25.445 17:18:12 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:25.445 17:18:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:25.445 17:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3232004 00:27:25.445 17:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:25.445 17:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:25.445 17:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3232004 00:27:25.445 17:18:12 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 3232004 ']' 00:27:25.445 17:18:12 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.445 17:18:12 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:25.445 17:18:12 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.445 17:18:12 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:25.445 17:18:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:25.445 [2024-05-15 17:18:12.912483] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:27:25.445 [2024-05-15 17:18:12.912527] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.445 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.445 [2024-05-15 17:18:12.965003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:25.445 [2024-05-15 17:18:13.044053] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.445 [2024-05-15 17:18:13.044089] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
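Because the target above was launched with --wait-for-rpc inside the namespace, subsystem bring-up is deferred until the test enables identify passthrough over JSON-RPC. The rpc_cmd calls that follow can be reproduced against the same UNIX socket with scripts/rpc.py, roughly as sketched here (socket path, BDF 0000:5e:00.0 and NQN are the ones visible in the trace; this is an illustration, not the test script itself):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    # must happen before framework_start_init so identify commands reach the real drive
    $RPC nvmf_set_config --passthru-identify-ctrlr
    $RPC framework_start_init

    # transport, controller, subsystem, namespace and listener, as in the trace
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The passthru check then compares the serial and model numbers read over PCIe against what spdk_nvme_identify reports over the new TCP listener; they must match for the test to pass.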
00:27:25.445 [2024-05-15 17:18:13.044096] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.445 [2024-05-15 17:18:13.044102] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.445 [2024-05-15 17:18:13.044107] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:25.445 [2024-05-15 17:18:13.044142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.445 [2024-05-15 17:18:13.044245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:25.445 [2024-05-15 17:18:13.044266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:25.445 [2024-05-15 17:18:13.044267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.377 17:18:13 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:26.377 17:18:13 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:27:26.377 17:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:26.377 17:18:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.377 17:18:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:26.377 INFO: Log level set to 20 00:27:26.377 INFO: Requests: 00:27:26.377 { 00:27:26.377 "jsonrpc": "2.0", 00:27:26.377 "method": "nvmf_set_config", 00:27:26.377 "id": 1, 00:27:26.377 "params": { 00:27:26.377 "admin_cmd_passthru": { 00:27:26.377 "identify_ctrlr": true 00:27:26.377 } 00:27:26.377 } 00:27:26.377 } 00:27:26.377 00:27:26.377 INFO: response: 00:27:26.377 { 00:27:26.377 "jsonrpc": "2.0", 00:27:26.377 "id": 1, 00:27:26.377 "result": true 00:27:26.377 } 00:27:26.377 00:27:26.377 17:18:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.377 17:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:26.377 17:18:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.377 17:18:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:26.377 INFO: Setting log level to 20 00:27:26.377 INFO: Setting log level to 20 00:27:26.377 INFO: Log level set to 20 00:27:26.377 INFO: Log level set to 20 00:27:26.377 INFO: Requests: 00:27:26.377 { 00:27:26.377 "jsonrpc": "2.0", 00:27:26.377 "method": "framework_start_init", 00:27:26.377 "id": 1 00:27:26.377 } 00:27:26.377 00:27:26.377 INFO: Requests: 00:27:26.377 { 00:27:26.377 "jsonrpc": "2.0", 00:27:26.377 "method": "framework_start_init", 00:27:26.377 "id": 1 00:27:26.377 } 00:27:26.377 00:27:26.377 [2024-05-15 17:18:13.843670] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:26.377 INFO: response: 00:27:26.377 { 00:27:26.377 "jsonrpc": "2.0", 00:27:26.377 "id": 1, 00:27:26.377 "result": true 00:27:26.377 } 00:27:26.377 00:27:26.377 INFO: response: 00:27:26.377 { 00:27:26.377 "jsonrpc": "2.0", 00:27:26.377 "id": 1, 00:27:26.377 "result": true 00:27:26.377 } 00:27:26.377 00:27:26.377 17:18:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.377 17:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:26.377 17:18:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.377 17:18:13 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:27:26.377 INFO: Setting log level to 40 00:27:26.377 INFO: Setting log level to 40 00:27:26.377 INFO: Setting log level to 40 00:27:26.377 [2024-05-15 17:18:13.857105] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.377 17:18:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.377 17:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:26.377 17:18:13 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:26.377 17:18:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:26.377 17:18:13 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:27:26.377 17:18:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.377 17:18:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:29.654 Nvme0n1 00:27:29.654 17:18:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.654 17:18:16 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:29.654 17:18:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.654 17:18:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:29.654 17:18:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.654 17:18:16 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:29.654 17:18:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.654 17:18:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:29.654 17:18:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.654 17:18:16 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:29.654 17:18:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.654 17:18:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:29.654 [2024-05-15 17:18:16.750419] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:29.654 [2024-05-15 17:18:16.750664] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.654 17:18:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.654 17:18:16 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:29.654 17:18:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.654 17:18:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:29.654 [ 00:27:29.654 { 00:27:29.654 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:29.654 "subtype": "Discovery", 00:27:29.654 "listen_addresses": [], 00:27:29.654 "allow_any_host": true, 00:27:29.654 "hosts": [] 00:27:29.654 }, 00:27:29.654 { 00:27:29.654 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:29.654 "subtype": "NVMe", 00:27:29.654 "listen_addresses": [ 00:27:29.654 { 00:27:29.654 "trtype": "TCP", 
00:27:29.654 "adrfam": "IPv4", 00:27:29.654 "traddr": "10.0.0.2", 00:27:29.654 "trsvcid": "4420" 00:27:29.654 } 00:27:29.654 ], 00:27:29.654 "allow_any_host": true, 00:27:29.654 "hosts": [], 00:27:29.654 "serial_number": "SPDK00000000000001", 00:27:29.654 "model_number": "SPDK bdev Controller", 00:27:29.654 "max_namespaces": 1, 00:27:29.654 "min_cntlid": 1, 00:27:29.654 "max_cntlid": 65519, 00:27:29.654 "namespaces": [ 00:27:29.654 { 00:27:29.654 "nsid": 1, 00:27:29.654 "bdev_name": "Nvme0n1", 00:27:29.654 "name": "Nvme0n1", 00:27:29.654 "nguid": "DC512C16CA5D444CA35A8C277D3B3211", 00:27:29.654 "uuid": "dc512c16-ca5d-444c-a35a-8c277d3b3211" 00:27:29.654 } 00:27:29.654 ] 00:27:29.654 } 00:27:29.654 ] 00:27:29.654 17:18:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.654 17:18:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:29.654 17:18:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:29.654 17:18:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:29.654 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.654 17:18:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:27:29.655 17:18:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:29.655 17:18:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:29.655 17:18:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:29.655 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.655 17:18:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:29.655 17:18:17 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:27:29.655 17:18:17 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:29.655 17:18:17 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:29.655 17:18:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.655 17:18:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:29.655 17:18:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.655 17:18:17 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:29.655 17:18:17 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:29.655 17:18:17 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:29.655 17:18:17 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:29.655 17:18:17 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:29.655 17:18:17 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:29.655 17:18:17 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:29.655 17:18:17 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:29.655 rmmod nvme_tcp 00:27:29.655 rmmod nvme_fabrics 00:27:29.655 rmmod 
nvme_keyring 00:27:29.655 17:18:17 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:29.655 17:18:17 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:29.655 17:18:17 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:29.655 17:18:17 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3232004 ']' 00:27:29.655 17:18:17 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3232004 00:27:29.655 17:18:17 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 3232004 ']' 00:27:29.655 17:18:17 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 3232004 00:27:29.655 17:18:17 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:27:29.655 17:18:17 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:29.655 17:18:17 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3232004 00:27:29.655 17:18:17 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:29.655 17:18:17 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:29.655 17:18:17 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3232004' 00:27:29.655 killing process with pid 3232004 00:27:29.655 17:18:17 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 3232004 00:27:29.655 [2024-05-15 17:18:17.162746] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:29.655 17:18:17 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 3232004 00:27:31.025 17:18:18 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:31.025 17:18:18 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:31.025 17:18:18 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:31.025 17:18:18 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.025 17:18:18 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:31.025 17:18:18 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.025 17:18:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:31.025 17:18:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.614 17:18:20 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:33.614 00:27:33.614 real 0m21.718s 00:27:33.614 user 0m29.732s 00:27:33.614 sys 0m4.785s 00:27:33.614 17:18:20 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:33.614 17:18:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:33.614 ************************************ 00:27:33.614 END TEST nvmf_identify_passthru 00:27:33.614 ************************************ 00:27:33.614 17:18:20 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:33.614 17:18:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:33.614 17:18:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:33.614 17:18:20 -- common/autotest_common.sh@10 -- # set +x 00:27:33.614 ************************************ 00:27:33.614 START TEST nvmf_dif 
00:27:33.614 ************************************ 00:27:33.614 17:18:20 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:33.614 * Looking for test storage... 00:27:33.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:33.614 17:18:20 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.614 17:18:20 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.614 17:18:20 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.614 17:18:20 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.614 17:18:20 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.614 17:18:20 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.614 17:18:20 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.614 17:18:20 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:33.614 17:18:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:33.614 17:18:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:33.614 17:18:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:33.614 17:18:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:33.614 17:18:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:33.614 17:18:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.614 17:18:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:33.614 17:18:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:33.614 17:18:20 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:33.614 17:18:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
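The arrays being filled in here (and in the identify_passthru run above) classify NICs purely by PCI vendor:device ID: 8086:1592 and 8086:159b for Intel E810, 8086:37d2 for X722, and the 15b3:* entries for Mellanox. Outside the harness, a quick way to check which of those parts a node actually has is an lspci ID filter, sketched below:

    # purely illustrative: list E810 functions by the same IDs the e810 array uses
    for id in 8086:1592 8086:159b; do
        lspci -Dnn -d "$id"
    done

On this node that would list the two 0000:86:00.x functions reported as 'Found 0000:86:00.0 (0x8086 - 0x159b)' in the trace.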
00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.870 17:18:25 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:38.871 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:38.871 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:38.871 17:18:25 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:38.871 Found net devices under 0000:86:00.0: cvl_0_0 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:38.871 Found net devices under 0000:86:00.1: cvl_0_1 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.871 17:18:25 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.871 17:18:26 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.871 17:18:26 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.871 17:18:26 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:38.871 17:18:26 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.871 17:18:26 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.871 17:18:26 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.871 17:18:26 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:38.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:27:38.871 00:27:38.871 --- 10.0.0.2 ping statistics --- 00:27:38.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.871 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:27:38.871 17:18:26 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:38.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:27:38.871 00:27:38.871 --- 10.0.0.1 ping statistics --- 00:27:38.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.871 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:27:38.871 17:18:26 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.871 17:18:26 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:38.871 17:18:26 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:38.871 17:18:26 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:41.398 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:41.398 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:41.398 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:41.398 17:18:28 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.399 17:18:28 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:41.399 17:18:28 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:41.399 17:18:28 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.399 17:18:28 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:41.399 17:18:28 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:41.399 17:18:28 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:41.399 17:18:28 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:27:41.399 17:18:28 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:41.399 17:18:28 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:41.399 17:18:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:41.399 17:18:28 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3237469 00:27:41.399 17:18:28 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:41.399 17:18:28 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3237469 00:27:41.399 17:18:28 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 3237469 ']' 00:27:41.399 17:18:28 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.399 17:18:28 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:41.399 17:18:28 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.399 17:18:28 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:41.399 17:18:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:41.399 [2024-05-15 17:18:28.989578] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:27:41.399 [2024-05-15 17:18:28.989617] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.399 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.399 [2024-05-15 17:18:29.047544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.656 [2024-05-15 17:18:29.127167] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.656 [2024-05-15 17:18:29.127201] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.656 [2024-05-15 17:18:29.127208] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.656 [2024-05-15 17:18:29.127215] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.656 [2024-05-15 17:18:29.127220] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
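(The startup notices above come from nvmf_tgt launched inside the cvl_0_0_ns_spdk namespace; the rpc_cmd calls traced below then configure it for DIF testing. Condensed from those calls — assuming SPDK's scripts/rpc.py client on the default /var/tmp/spdk.sock, with addresses and NQNs exactly as dif.sh uses them — the target-side setup is roughly:

  # TCP transport that inserts/strips DIF metadata on the target side
  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  # 64 MB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420)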
00:27:41.656 [2024-05-15 17:18:29.127243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.221 17:18:29 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:42.221 17:18:29 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:27:42.221 17:18:29 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:42.221 17:18:29 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:42.221 17:18:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:42.221 17:18:29 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.221 17:18:29 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:42.221 17:18:29 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:42.221 17:18:29 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.221 17:18:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:42.221 [2024-05-15 17:18:29.833131] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.221 17:18:29 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.221 17:18:29 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:42.221 17:18:29 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:42.221 17:18:29 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:42.221 17:18:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:42.221 ************************************ 00:27:42.221 START TEST fio_dif_1_default 00:27:42.221 ************************************ 00:27:42.221 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:27:42.221 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:42.221 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:42.221 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:42.221 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:42.221 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:42.221 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:42.221 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.221 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:42.479 bdev_null0 00:27:42.479 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.479 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:42.479 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:42.480 [2024-05-15 17:18:29.905278] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:42.480 [2024-05-15 17:18:29.905476] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:42.480 { 00:27:42.480 "params": { 00:27:42.480 "name": "Nvme$subsystem", 00:27:42.480 "trtype": "$TEST_TRANSPORT", 00:27:42.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.480 "adrfam": "ipv4", 00:27:42.480 "trsvcid": "$NVMF_PORT", 00:27:42.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.480 "hdgst": ${hdgst:-false}, 00:27:42.480 "ddgst": ${ddgst:-false} 00:27:42.480 }, 00:27:42.480 "method": "bdev_nvme_attach_controller" 00:27:42.480 } 00:27:42.480 EOF 00:27:42.480 )") 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in 
"${sanitizers[@]}" 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:42.480 "params": { 00:27:42.480 "name": "Nvme0", 00:27:42.480 "trtype": "tcp", 00:27:42.480 "traddr": "10.0.0.2", 00:27:42.480 "adrfam": "ipv4", 00:27:42.480 "trsvcid": "4420", 00:27:42.480 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:42.480 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:42.480 "hdgst": false, 00:27:42.480 "ddgst": false 00:27:42.480 }, 00:27:42.480 "method": "bdev_nvme_attach_controller" 00:27:42.480 }' 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:42.480 17:18:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.736 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:42.736 fio-3.35 00:27:42.736 Starting 1 thread 00:27:42.736 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.918 00:27:54.918 filename0: (groupid=0, jobs=1): err= 0: pid=3237844: Wed May 15 17:18:40 2024 00:27:54.918 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10004msec) 00:27:54.918 slat (nsec): min=4222, max=20577, avg=6229.58, stdev=627.93 00:27:54.918 clat (usec): min=533, max=49637, avg=21087.57, stdev=20429.42 00:27:54.918 lat (usec): min=539, max=49650, avg=21093.80, stdev=20429.37 00:27:54.918 clat percentiles (usec): 00:27:54.918 | 1.00th=[ 545], 5.00th=[ 553], 10.00th=[ 553], 20.00th=[ 562], 00:27:54.918 | 30.00th=[ 570], 40.00th=[ 586], 50.00th=[41157], 60.00th=[41157], 00:27:54.918 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:27:54.918 | 99.00th=[41681], 99.50th=[41681], 
99.90th=[49546], 99.95th=[49546], 00:27:54.918 | 99.99th=[49546] 00:27:54.918 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=759.58, stdev=25.78, samples=19 00:27:54.918 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:27:54.918 lat (usec) : 750=49.74%, 1000=0.05% 00:27:54.918 lat (msec) : 50=50.21% 00:27:54.918 cpu : usr=94.67%, sys=5.07%, ctx=9, majf=0, minf=205 00:27:54.918 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:54.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:54.918 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:54.918 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:54.918 00:27:54.918 Run status group 0 (all jobs): 00:27:54.918 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10004-10004msec 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.918 00:27:54.918 real 0m11.022s 00:27:54.918 user 0m15.943s 00:27:54.918 sys 0m0.781s 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:54.918 ************************************ 00:27:54.918 END TEST fio_dif_1_default 00:27:54.918 ************************************ 00:27:54.918 17:18:40 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:54.918 17:18:40 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:54.918 17:18:40 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:54.918 17:18:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:54.918 ************************************ 00:27:54.918 START TEST fio_dif_1_multi_subsystems 00:27:54.918 ************************************ 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:54.918 17:18:40 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:54.918 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:54.919 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.919 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.919 bdev_null0 00:27:54.919 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.919 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:54.919 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.919 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.919 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.919 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:54.919 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.919 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.919 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.919 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:54.919 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.919 17:18:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.919 [2024-05-15 17:18:40.998411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.919 bdev_null1 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.919 { 00:27:54.919 "params": { 00:27:54.919 "name": "Nvme$subsystem", 00:27:54.919 "trtype": "$TEST_TRANSPORT", 00:27:54.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.919 "adrfam": "ipv4", 00:27:54.919 "trsvcid": "$NVMF_PORT", 00:27:54.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.919 "hdgst": ${hdgst:-false}, 00:27:54.919 "ddgst": ${ddgst:-false} 00:27:54.919 }, 00:27:54.919 "method": "bdev_nvme_attach_controller" 00:27:54.919 } 00:27:54.919 EOF 00:27:54.919 )") 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:54.919 17:18:41 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:54.919 { 00:27:54.919 "params": { 00:27:54.919 "name": "Nvme$subsystem", 00:27:54.919 "trtype": "$TEST_TRANSPORT", 00:27:54.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.919 "adrfam": "ipv4", 00:27:54.919 "trsvcid": "$NVMF_PORT", 00:27:54.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.919 "hdgst": ${hdgst:-false}, 00:27:54.919 "ddgst": ${ddgst:-false} 00:27:54.919 }, 00:27:54.919 "method": "bdev_nvme_attach_controller" 00:27:54.919 } 00:27:54.919 EOF 00:27:54.919 )") 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
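(Each fio_dif_* case drives host-side I/O through fio's external SPDK bdev engine rather than the kernel NVMe/TCP initiator; the JSON printed just below is what gets fed to it via --spdk_json_conf. Stripped of the per-test plumbing, the invocation pattern visible in this trace is roughly the following, where bdev.json and job.fio are placeholder file names standing in for the /dev/fd/62 and /dev/fd/61 descriptors the script actually passes:

  # preload the SPDK fio plugin, attach over NVMe/TCP, run the job against the exposed bdevs
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./job.fio

with bdev.json carrying the bdev_nvme_attach_controller parameters seen below: trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode0 and cnode1.)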
00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:54.919 "params": { 00:27:54.919 "name": "Nvme0", 00:27:54.919 "trtype": "tcp", 00:27:54.919 "traddr": "10.0.0.2", 00:27:54.919 "adrfam": "ipv4", 00:27:54.919 "trsvcid": "4420", 00:27:54.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:54.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:54.919 "hdgst": false, 00:27:54.919 "ddgst": false 00:27:54.919 }, 00:27:54.919 "method": "bdev_nvme_attach_controller" 00:27:54.919 },{ 00:27:54.919 "params": { 00:27:54.919 "name": "Nvme1", 00:27:54.919 "trtype": "tcp", 00:27:54.919 "traddr": "10.0.0.2", 00:27:54.919 "adrfam": "ipv4", 00:27:54.919 "trsvcid": "4420", 00:27:54.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:54.919 "hdgst": false, 00:27:54.919 "ddgst": false 00:27:54.919 }, 00:27:54.919 "method": "bdev_nvme_attach_controller" 00:27:54.919 }' 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:54.919 17:18:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.919 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:54.919 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:54.919 fio-3.35 00:27:54.919 Starting 2 threads 00:27:54.919 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.873 00:28:04.873 filename0: (groupid=0, jobs=1): err= 0: pid=3239817: Wed May 15 17:18:52 2024 00:28:04.873 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:28:04.873 slat (nsec): min=4223, max=27340, avg=7834.42, stdev=2455.92 00:28:04.873 clat (usec): min=40844, max=47263, avg=41006.67, stdev=408.80 00:28:04.873 lat (usec): min=40851, max=47275, avg=41014.51, stdev=408.75 00:28:04.873 clat percentiles (usec): 00:28:04.873 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:28:04.873 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:04.873 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:04.873 | 99.00th=[41157], 99.50th=[41681], 99.90th=[47449], 99.95th=[47449], 00:28:04.873 | 99.99th=[47449] 
00:28:04.873 bw ( KiB/s): min= 384, max= 416, per=49.76%, avg=388.80, stdev=11.72, samples=20 00:28:04.873 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:28:04.873 lat (msec) : 50=100.00% 00:28:04.873 cpu : usr=97.83%, sys=1.92%, ctx=5, majf=0, minf=159 00:28:04.873 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:04.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.873 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.873 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:04.873 filename1: (groupid=0, jobs=1): err= 0: pid=3239818: Wed May 15 17:18:52 2024 00:28:04.873 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:28:04.873 slat (nsec): min=6173, max=25928, avg=7860.47, stdev=2423.62 00:28:04.873 clat (usec): min=40808, max=45515, avg=41012.19, stdev=319.84 00:28:04.873 lat (usec): min=40815, max=45541, avg=41020.05, stdev=320.25 00:28:04.873 clat percentiles (usec): 00:28:04.873 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:28:04.873 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:04.873 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:04.873 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:28:04.873 | 99.99th=[45351] 00:28:04.874 bw ( KiB/s): min= 384, max= 416, per=49.76%, avg=388.80, stdev=11.72, samples=20 00:28:04.874 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:28:04.874 lat (msec) : 50=100.00% 00:28:04.874 cpu : usr=97.56%, sys=2.19%, ctx=13, majf=0, minf=86 00:28:04.874 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:04.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.874 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.874 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:04.874 00:28:04.874 Run status group 0 (all jobs): 00:28:04.874 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10012-10013msec 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.874 17:18:52 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.874 00:28:04.874 real 0m11.331s 00:28:04.874 user 0m26.116s 00:28:04.874 sys 0m0.726s 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:04.874 17:18:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:04.874 ************************************ 00:28:04.874 END TEST fio_dif_1_multi_subsystems 00:28:04.874 ************************************ 00:28:04.874 17:18:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:04.874 17:18:52 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:04.874 17:18:52 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:04.874 17:18:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:04.874 ************************************ 00:28:04.874 START TEST fio_dif_rand_params 00:28:04.874 ************************************ 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.874 bdev_null0 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:04.874 [2024-05-15 17:18:52.402103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.874 { 00:28:04.874 "params": { 00:28:04.874 "name": "Nvme$subsystem", 00:28:04.874 "trtype": "$TEST_TRANSPORT", 00:28:04.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.874 "adrfam": "ipv4", 00:28:04.874 "trsvcid": "$NVMF_PORT", 00:28:04.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.874 "hdgst": 
${hdgst:-false}, 00:28:04.874 "ddgst": ${ddgst:-false} 00:28:04.874 }, 00:28:04.874 "method": "bdev_nvme_attach_controller" 00:28:04.874 } 00:28:04.874 EOF 00:28:04.874 )") 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:04.874 "params": { 00:28:04.874 "name": "Nvme0", 00:28:04.874 "trtype": "tcp", 00:28:04.874 "traddr": "10.0.0.2", 00:28:04.874 "adrfam": "ipv4", 00:28:04.874 "trsvcid": "4420", 00:28:04.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:04.874 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:04.874 "hdgst": false, 00:28:04.874 "ddgst": false 00:28:04.874 }, 00:28:04.874 "method": "bdev_nvme_attach_controller" 00:28:04.874 }' 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:04.874 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:04.875 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:04.875 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:04.875 17:18:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.131 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:05.131 ... 
00:28:05.131 fio-3.35 00:28:05.131 Starting 3 threads 00:28:05.131 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.734 00:28:11.734 filename0: (groupid=0, jobs=1): err= 0: pid=3241781: Wed May 15 17:18:58 2024 00:28:11.734 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(161MiB/5005msec) 00:28:11.734 slat (nsec): min=6394, max=26380, avg=9982.29, stdev=2753.98 00:28:11.734 clat (usec): min=3949, max=51164, avg=11624.17, stdev=12480.85 00:28:11.734 lat (usec): min=3956, max=51178, avg=11634.16, stdev=12480.90 00:28:11.734 clat percentiles (usec): 00:28:11.734 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4883], 20.00th=[ 5997], 00:28:11.734 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7504], 60.00th=[ 8291], 00:28:11.734 | 70.00th=[ 8848], 80.00th=[ 9896], 90.00th=[45876], 95.00th=[47973], 00:28:11.734 | 99.00th=[50070], 99.50th=[50594], 99.90th=[50594], 99.95th=[51119], 00:28:11.734 | 99.99th=[51119] 00:28:11.734 bw ( KiB/s): min=23808, max=49920, per=31.03%, avg=32947.20, stdev=7952.09, samples=10 00:28:11.734 iops : min= 186, max= 390, avg=257.40, stdev=62.13, samples=10 00:28:11.734 lat (msec) : 4=0.08%, 10=80.31%, 20=9.38%, 50=8.91%, 100=1.32% 00:28:11.734 cpu : usr=94.96%, sys=4.72%, ctx=9, majf=0, minf=72 00:28:11.734 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.734 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.734 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:11.734 filename0: (groupid=0, jobs=1): err= 0: pid=3241782: Wed May 15 17:18:58 2024 00:28:11.734 read: IOPS=276, BW=34.6MiB/s (36.2MB/s)(174MiB/5042msec) 00:28:11.734 slat (nsec): min=6381, max=25905, avg=9870.87, stdev=2639.15 00:28:11.734 clat (usec): min=3878, max=53479, avg=10808.55, stdev=11654.75 00:28:11.734 lat (usec): min=3885, max=53486, avg=10818.42, stdev=11654.98 00:28:11.734 clat percentiles (usec): 00:28:11.734 | 1.00th=[ 4146], 5.00th=[ 4424], 10.00th=[ 4555], 20.00th=[ 5145], 00:28:11.734 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 7242], 60.00th=[ 8094], 00:28:11.734 | 70.00th=[ 8848], 80.00th=[10159], 90.00th=[11994], 95.00th=[47973], 00:28:11.734 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52691], 99.95th=[53740], 00:28:11.734 | 99.99th=[53740] 00:28:11.734 bw ( KiB/s): min=22272, max=46080, per=33.57%, avg=35644.10, stdev=7438.16, samples=10 00:28:11.734 iops : min= 174, max= 360, avg=278.40, stdev=58.02, samples=10 00:28:11.734 lat (msec) : 4=0.36%, 10=78.55%, 20=12.55%, 50=7.17%, 100=1.36% 00:28:11.734 cpu : usr=94.43%, sys=5.26%, ctx=9, majf=0, minf=79 00:28:11.734 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.734 issued rwts: total=1394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.734 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:11.734 filename0: (groupid=0, jobs=1): err= 0: pid=3241783: Wed May 15 17:18:58 2024 00:28:11.734 read: IOPS=298, BW=37.4MiB/s (39.2MB/s)(187MiB/5012msec) 00:28:11.734 slat (nsec): min=6420, max=25390, avg=9965.12, stdev=2504.35 00:28:11.734 clat (usec): min=3843, max=51308, avg=10022.54, stdev=10430.19 00:28:11.734 lat (usec): min=3851, max=51315, avg=10032.51, stdev=10430.36 00:28:11.734 clat percentiles 
(usec): 00:28:11.734 | 1.00th=[ 4178], 5.00th=[ 4490], 10.00th=[ 4686], 20.00th=[ 5669], 00:28:11.734 | 30.00th=[ 6325], 40.00th=[ 6652], 50.00th=[ 7111], 60.00th=[ 7701], 00:28:11.734 | 70.00th=[ 8586], 80.00th=[ 9634], 90.00th=[11076], 95.00th=[46924], 00:28:11.734 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50594], 99.95th=[51119], 00:28:11.734 | 99.99th=[51119] 00:28:11.734 bw ( KiB/s): min=24832, max=46080, per=36.05%, avg=38272.00, stdev=6439.42, samples=10 00:28:11.734 iops : min= 194, max= 360, avg=299.00, stdev=50.31, samples=10 00:28:11.734 lat (msec) : 4=0.13%, 10=83.71%, 20=9.35%, 50=6.48%, 100=0.33% 00:28:11.734 cpu : usr=93.99%, sys=5.69%, ctx=11, majf=0, minf=104 00:28:11.734 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.734 issued rwts: total=1498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.734 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:11.734 00:28:11.734 Run status group 0 (all jobs): 00:28:11.734 READ: bw=104MiB/s (109MB/s), 32.2MiB/s-37.4MiB/s (33.8MB/s-39.2MB/s), io=523MiB (548MB), run=5005-5042msec 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:11.734 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.735 bdev_null0 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.735 [2024-05-15 17:18:58.622636] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.735 bdev_null1 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.735 bdev_null2 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.735 { 00:28:11.735 "params": { 00:28:11.735 "name": "Nvme$subsystem", 00:28:11.735 "trtype": 
"$TEST_TRANSPORT", 00:28:11.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.735 "adrfam": "ipv4", 00:28:11.735 "trsvcid": "$NVMF_PORT", 00:28:11.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.735 "hdgst": ${hdgst:-false}, 00:28:11.735 "ddgst": ${ddgst:-false} 00:28:11.735 }, 00:28:11.735 "method": "bdev_nvme_attach_controller" 00:28:11.735 } 00:28:11.735 EOF 00:28:11.735 )") 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.735 { 00:28:11.735 "params": { 00:28:11.735 "name": "Nvme$subsystem", 00:28:11.735 "trtype": "$TEST_TRANSPORT", 00:28:11.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.735 "adrfam": "ipv4", 00:28:11.735 "trsvcid": "$NVMF_PORT", 00:28:11.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.735 "hdgst": ${hdgst:-false}, 00:28:11.735 "ddgst": ${ddgst:-false} 00:28:11.735 }, 00:28:11.735 "method": "bdev_nvme_attach_controller" 00:28:11.735 } 00:28:11.735 EOF 00:28:11.735 )") 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:11.735 17:18:58 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:11.735 { 00:28:11.735 "params": { 00:28:11.735 "name": "Nvme$subsystem", 00:28:11.735 "trtype": "$TEST_TRANSPORT", 00:28:11.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.735 "adrfam": "ipv4", 00:28:11.735 "trsvcid": "$NVMF_PORT", 00:28:11.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.735 "hdgst": ${hdgst:-false}, 00:28:11.735 "ddgst": ${ddgst:-false} 00:28:11.735 }, 00:28:11.735 "method": "bdev_nvme_attach_controller" 00:28:11.735 } 00:28:11.735 EOF 00:28:11.735 )") 00:28:11.735 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:11.736 17:18:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:11.736 17:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:11.736 17:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:11.736 17:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:11.736 17:18:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:11.736 "params": { 00:28:11.736 "name": "Nvme0", 00:28:11.736 "trtype": "tcp", 00:28:11.736 "traddr": "10.0.0.2", 00:28:11.736 "adrfam": "ipv4", 00:28:11.736 "trsvcid": "4420", 00:28:11.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:11.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:11.736 "hdgst": false, 00:28:11.736 "ddgst": false 00:28:11.736 }, 00:28:11.736 "method": "bdev_nvme_attach_controller" 00:28:11.736 },{ 00:28:11.736 "params": { 00:28:11.736 "name": "Nvme1", 00:28:11.736 "trtype": "tcp", 00:28:11.736 "traddr": "10.0.0.2", 00:28:11.736 "adrfam": "ipv4", 00:28:11.736 "trsvcid": "4420", 00:28:11.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:11.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:11.736 "hdgst": false, 00:28:11.736 "ddgst": false 00:28:11.736 }, 00:28:11.736 "method": "bdev_nvme_attach_controller" 00:28:11.736 },{ 00:28:11.736 "params": { 00:28:11.736 "name": "Nvme2", 00:28:11.736 "trtype": "tcp", 00:28:11.736 "traddr": "10.0.0.2", 00:28:11.736 "adrfam": "ipv4", 00:28:11.736 "trsvcid": "4420", 00:28:11.736 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:11.736 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:11.736 "hdgst": false, 00:28:11.736 "ddgst": false 00:28:11.736 }, 00:28:11.736 "method": "bdev_nvme_attach_controller" 00:28:11.736 }' 00:28:11.736 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:11.736 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:11.736 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:11.736 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.736 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:28:11.736 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:11.736 
17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:11.736 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:11.736 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:11.736 17:18:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:11.736 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:11.736 ... 00:28:11.736 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:11.736 ... 00:28:11.736 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:11.736 ... 00:28:11.736 fio-3.35 00:28:11.736 Starting 24 threads 00:28:11.736 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.925 00:28:23.925 filename0: (groupid=0, jobs=1): err= 0: pid=3243043: Wed May 15 17:19:10 2024 00:28:23.925 read: IOPS=630, BW=2520KiB/s (2581kB/s)(24.6MiB/10006msec) 00:28:23.925 slat (nsec): min=6734, max=92595, avg=38855.49, stdev=17902.23 00:28:23.925 clat (usec): min=1231, max=40717, avg=25084.70, stdev=3305.54 00:28:23.925 lat (usec): min=1245, max=40730, avg=25123.55, stdev=3307.88 00:28:23.925 clat percentiles (usec): 00:28:23.925 | 1.00th=[ 1647], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:23.925 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:23.925 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:28:23.925 | 99.00th=[28181], 99.50th=[28181], 99.90th=[38011], 99.95th=[38011], 00:28:23.925 | 99.99th=[40633] 00:28:23.925 bw ( KiB/s): min= 2304, max= 3328, per=4.26%, avg=2526.00, stdev=208.29, samples=19 00:28:23.925 iops : min= 576, max= 832, avg=631.47, stdev=52.07, samples=19 00:28:23.925 lat (msec) : 2=1.02%, 10=1.02%, 20=0.25%, 50=97.72% 00:28:23.925 cpu : usr=99.11%, sys=0.52%, ctx=15, majf=0, minf=48 00:28:23.925 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:23.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.925 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.925 issued rwts: total=6304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.925 filename0: (groupid=0, jobs=1): err= 0: pid=3243044: Wed May 15 17:19:10 2024 00:28:23.925 read: IOPS=618, BW=2473KiB/s (2532kB/s)(24.2MiB/10016msec) 00:28:23.925 slat (nsec): min=5207, max=85838, avg=39516.11, stdev=17289.80 00:28:23.925 clat (usec): min=17964, max=48924, avg=25520.70, stdev=1492.63 00:28:23.925 lat (usec): min=17973, max=48939, avg=25560.22, stdev=1491.44 00:28:23.925 clat percentiles (usec): 00:28:23.925 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:23.925 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:23.925 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:28:23.925 | 99.00th=[27919], 99.50th=[28181], 99.90th=[49021], 99.95th=[49021], 00:28:23.925 | 99.99th=[49021] 00:28:23.925 bw ( KiB/s): min= 2299, max= 2560, per=4.17%, avg=2471.84, stdev=86.12, samples=19 00:28:23.925 iops : min= 574, max= 640, avg=617.89, stdev=21.59, samples=19 
00:28:23.925 lat (msec) : 20=0.24%, 50=99.76% 00:28:23.925 cpu : usr=98.74%, sys=0.71%, ctx=58, majf=0, minf=34 00:28:23.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.925 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.925 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.925 filename0: (groupid=0, jobs=1): err= 0: pid=3243045: Wed May 15 17:19:10 2024 00:28:23.925 read: IOPS=618, BW=2473KiB/s (2532kB/s)(24.2MiB/10015msec) 00:28:23.925 slat (nsec): min=7045, max=86483, avg=37040.86, stdev=17735.78 00:28:23.925 clat (usec): min=18222, max=48687, avg=25555.41, stdev=1481.20 00:28:23.925 lat (usec): min=18241, max=48705, avg=25592.45, stdev=1479.51 00:28:23.925 clat percentiles (usec): 00:28:23.925 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:28:23.925 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25297], 00:28:23.925 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:28:23.925 | 99.00th=[28181], 99.50th=[28443], 99.90th=[48497], 99.95th=[48497], 00:28:23.925 | 99.99th=[48497] 00:28:23.925 bw ( KiB/s): min= 2299, max= 2560, per=4.17%, avg=2471.84, stdev=86.12, samples=19 00:28:23.925 iops : min= 574, max= 640, avg=617.89, stdev=21.59, samples=19 00:28:23.925 lat (msec) : 20=0.26%, 50=99.74% 00:28:23.925 cpu : usr=98.97%, sys=0.64%, ctx=18, majf=0, minf=44 00:28:23.925 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.925 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.925 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.925 filename0: (groupid=0, jobs=1): err= 0: pid=3243046: Wed May 15 17:19:10 2024 00:28:23.925 read: IOPS=622, BW=2491KiB/s (2551kB/s)(24.4MiB/10021msec) 00:28:23.925 slat (nsec): min=6448, max=84199, avg=38566.35, stdev=14522.50 00:28:23.925 clat (usec): min=5311, max=40733, avg=25388.48, stdev=1996.03 00:28:23.925 lat (usec): min=5335, max=40747, avg=25427.05, stdev=1996.65 00:28:23.925 clat percentiles (usec): 00:28:23.925 | 1.00th=[15795], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:28:23.925 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25297], 00:28:23.925 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:28:23.925 | 99.00th=[27919], 99.50th=[28181], 99.90th=[37487], 99.95th=[37487], 00:28:23.925 | 99.99th=[40633] 00:28:23.925 bw ( KiB/s): min= 2432, max= 2688, per=4.20%, avg=2489.60, stdev=77.42, samples=20 00:28:23.925 iops : min= 608, max= 672, avg=622.40, stdev=19.35, samples=20 00:28:23.925 lat (msec) : 10=0.77%, 20=0.26%, 50=98.97% 00:28:23.925 cpu : usr=96.85%, sys=1.62%, ctx=377, majf=0, minf=31 00:28:23.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.925 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.925 issued rwts: total=6240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.925 filename0: (groupid=0, jobs=1): err= 0: pid=3243047: Wed May 
15 17:19:10 2024 00:28:23.925 read: IOPS=619, BW=2478KiB/s (2538kB/s)(24.2MiB/10019msec) 00:28:23.925 slat (nsec): min=7044, max=76467, avg=35160.36, stdev=13818.53 00:28:23.925 clat (usec): min=12860, max=52250, avg=25548.98, stdev=1759.58 00:28:23.925 lat (usec): min=12870, max=52280, avg=25584.14, stdev=1759.53 00:28:23.925 clat percentiles (usec): 00:28:23.925 | 1.00th=[19530], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:28:23.925 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:23.925 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:28:23.925 | 99.00th=[28705], 99.50th=[31851], 99.90th=[45876], 99.95th=[46400], 00:28:23.925 | 99.99th=[52167] 00:28:23.925 bw ( KiB/s): min= 2304, max= 2560, per=4.18%, avg=2476.25, stdev=72.49, samples=20 00:28:23.925 iops : min= 576, max= 640, avg=619.00, stdev=18.13, samples=20 00:28:23.925 lat (msec) : 20=1.21%, 50=98.76%, 100=0.03% 00:28:23.925 cpu : usr=97.60%, sys=1.24%, ctx=63, majf=0, minf=37 00:28:23.925 IO depths : 1=5.7%, 2=11.8%, 4=24.7%, 8=51.0%, 16=6.8%, 32=0.0%, >=64=0.0% 00:28:23.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.925 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.925 issued rwts: total=6208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.925 filename0: (groupid=0, jobs=1): err= 0: pid=3243048: Wed May 15 17:19:10 2024 00:28:23.925 read: IOPS=617, BW=2470KiB/s (2529kB/s)(24.1MiB/10002msec) 00:28:23.925 slat (nsec): min=5738, max=90190, avg=43073.38, stdev=15859.35 00:28:23.925 clat (usec): min=17352, max=63549, avg=25516.74, stdev=2140.22 00:28:23.925 lat (usec): min=17360, max=63566, avg=25559.81, stdev=2139.14 00:28:23.925 clat percentiles (usec): 00:28:23.925 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:23.925 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:23.925 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26870], 95.00th=[27395], 00:28:23.925 | 99.00th=[27919], 99.50th=[28181], 99.90th=[63701], 99.95th=[63701], 00:28:23.925 | 99.99th=[63701] 00:28:23.925 bw ( KiB/s): min= 2304, max= 2560, per=4.17%, avg=2472.16, stdev=86.03, samples=19 00:28:23.925 iops : min= 576, max= 640, avg=618.00, stdev=21.53, samples=19 00:28:23.925 lat (msec) : 20=0.26%, 50=99.48%, 100=0.26% 00:28:23.925 cpu : usr=97.59%, sys=1.34%, ctx=305, majf=0, minf=29 00:28:23.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:23.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.925 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.925 issued rwts: total=6176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.925 filename0: (groupid=0, jobs=1): err= 0: pid=3243049: Wed May 15 17:19:10 2024 00:28:23.925 read: IOPS=617, BW=2469KiB/s (2528kB/s)(24.1MiB/10006msec) 00:28:23.925 slat (nsec): min=6656, max=83617, avg=38922.90, stdev=16482.05 00:28:23.925 clat (usec): min=17981, max=69910, avg=25568.91, stdev=2119.26 00:28:23.925 lat (usec): min=17996, max=69932, avg=25607.83, stdev=2117.78 00:28:23.925 clat percentiles (usec): 00:28:23.925 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:23.925 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:23.925 | 70.00th=[25560], 80.00th=[25822], 
90.00th=[26870], 95.00th=[27395], 00:28:23.925 | 99.00th=[27919], 99.50th=[28181], 99.90th=[62653], 99.95th=[62653], 00:28:23.925 | 99.99th=[69731] 00:28:23.925 bw ( KiB/s): min= 2308, max= 2560, per=4.16%, avg=2465.63, stdev=71.58, samples=19 00:28:23.925 iops : min= 577, max= 640, avg=616.37, stdev=17.92, samples=19 00:28:23.925 lat (msec) : 20=0.28%, 50=99.47%, 100=0.26% 00:28:23.925 cpu : usr=99.09%, sys=0.51%, ctx=13, majf=0, minf=32 00:28:23.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.925 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.926 issued rwts: total=6176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.926 filename0: (groupid=0, jobs=1): err= 0: pid=3243050: Wed May 15 17:19:10 2024 00:28:23.926 read: IOPS=622, BW=2491KiB/s (2551kB/s)(24.4MiB/10020msec) 00:28:23.926 slat (nsec): min=8371, max=92799, avg=42152.51, stdev=15946.63 00:28:23.926 clat (usec): min=5285, max=40732, avg=25326.12, stdev=2014.54 00:28:23.926 lat (usec): min=5309, max=40748, avg=25368.27, stdev=2015.67 00:28:23.926 clat percentiles (usec): 00:28:23.926 | 1.00th=[15926], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:23.926 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:23.926 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26870], 95.00th=[27395], 00:28:23.926 | 99.00th=[27919], 99.50th=[28181], 99.90th=[37487], 99.95th=[37487], 00:28:23.926 | 99.99th=[40633] 00:28:23.926 bw ( KiB/s): min= 2432, max= 2693, per=4.20%, avg=2489.85, stdev=78.09, samples=20 00:28:23.926 iops : min= 608, max= 673, avg=622.45, stdev=19.49, samples=20 00:28:23.926 lat (msec) : 10=0.77%, 20=0.26%, 50=98.97% 00:28:23.926 cpu : usr=98.58%, sys=0.90%, ctx=67, majf=0, minf=32 00:28:23.926 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.926 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.926 issued rwts: total=6240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.926 filename1: (groupid=0, jobs=1): err= 0: pid=3243051: Wed May 15 17:19:10 2024 00:28:23.926 read: IOPS=617, BW=2469KiB/s (2528kB/s)(24.1MiB/10007msec) 00:28:23.926 slat (nsec): min=7467, max=79112, avg=34389.31, stdev=13476.56 00:28:23.926 clat (usec): min=17966, max=70509, avg=25638.54, stdev=2197.74 00:28:23.926 lat (usec): min=17981, max=70530, avg=25672.93, stdev=2195.79 00:28:23.926 clat percentiles (usec): 00:28:23.926 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:28:23.926 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:23.926 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:28:23.926 | 99.00th=[28181], 99.50th=[28443], 99.90th=[63177], 99.95th=[63177], 00:28:23.926 | 99.99th=[70779] 00:28:23.926 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2465.42, stdev=72.07, samples=19 00:28:23.926 iops : min= 576, max= 640, avg=616.32, stdev=18.04, samples=19 00:28:23.926 lat (msec) : 20=0.45%, 50=99.29%, 100=0.26% 00:28:23.926 cpu : usr=96.44%, sys=1.99%, ctx=256, majf=0, minf=39 00:28:23.926 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:23.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.926 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.926 issued rwts: total=6176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.926 filename1: (groupid=0, jobs=1): err= 0: pid=3243052: Wed May 15 17:19:10 2024 00:28:23.926 read: IOPS=618, BW=2474KiB/s (2533kB/s)(24.2MiB/10013msec) 00:28:23.926 slat (nsec): min=7318, max=76430, avg=31419.78, stdev=13775.26 00:28:23.926 clat (usec): min=14638, max=46088, avg=25618.22, stdev=1563.60 00:28:23.926 lat (usec): min=14649, max=46118, avg=25649.64, stdev=1563.31 00:28:23.926 clat percentiles (usec): 00:28:23.926 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:28:23.926 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:23.926 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:28:23.926 | 99.00th=[28181], 99.50th=[35390], 99.90th=[45876], 99.95th=[45876], 00:28:23.926 | 99.99th=[45876] 00:28:23.926 bw ( KiB/s): min= 2304, max= 2560, per=4.17%, avg=2472.05, stdev=74.21, samples=19 00:28:23.926 iops : min= 576, max= 640, avg=617.95, stdev=18.55, samples=19 00:28:23.926 lat (msec) : 20=0.52%, 50=99.48% 00:28:23.926 cpu : usr=97.23%, sys=1.44%, ctx=88, majf=0, minf=47 00:28:23.926 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.926 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.926 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.926 filename1: (groupid=0, jobs=1): err= 0: pid=3243053: Wed May 15 17:19:10 2024 00:28:23.926 read: IOPS=621, BW=2485KiB/s (2545kB/s)(24.3MiB/10019msec) 00:28:23.926 slat (nsec): min=4435, max=76503, avg=33386.67, stdev=14083.51 00:28:23.926 clat (usec): min=10295, max=52397, avg=25479.89, stdev=2008.02 00:28:23.926 lat (usec): min=10305, max=52410, avg=25513.27, stdev=2009.05 00:28:23.926 clat percentiles (usec): 00:28:23.926 | 1.00th=[18482], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:28:23.926 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25297], 00:28:23.926 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:28:23.926 | 99.00th=[28705], 99.50th=[35390], 99.90th=[46400], 99.95th=[46400], 00:28:23.926 | 99.99th=[52167] 00:28:23.926 bw ( KiB/s): min= 2304, max= 2688, per=4.19%, avg=2482.65, stdev=87.00, samples=20 00:28:23.926 iops : min= 576, max= 672, avg=620.60, stdev=21.76, samples=20 00:28:23.926 lat (msec) : 20=1.93%, 50=98.04%, 100=0.03% 00:28:23.926 cpu : usr=98.89%, sys=0.70%, ctx=36, majf=0, minf=38 00:28:23.926 IO depths : 1=5.8%, 2=11.9%, 4=24.5%, 8=51.1%, 16=6.7%, 32=0.0%, >=64=0.0% 00:28:23.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.926 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.926 issued rwts: total=6224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.926 filename1: (groupid=0, jobs=1): err= 0: pid=3243054: Wed May 15 17:19:10 2024 00:28:23.926 read: IOPS=620, BW=2482KiB/s (2541kB/s)(24.2MiB/10003msec) 00:28:23.926 slat (nsec): min=6702, max=83571, avg=38388.65, stdev=18168.77 00:28:23.926 clat (usec): min=2694, max=49362, avg=25404.67, stdev=2062.32 
00:28:23.926 lat (usec): min=2702, max=49403, avg=25443.06, stdev=2064.20 00:28:23.926 clat percentiles (usec): 00:28:23.926 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:23.926 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:23.926 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:28:23.926 | 99.00th=[27919], 99.50th=[28181], 99.90th=[49021], 99.95th=[49021], 00:28:23.926 | 99.99th=[49546] 00:28:23.926 bw ( KiB/s): min= 2304, max= 2560, per=4.17%, avg=2472.32, stdev=85.13, samples=19 00:28:23.926 iops : min= 576, max= 640, avg=618.05, stdev=21.26, samples=19 00:28:23.926 lat (msec) : 4=0.23%, 10=0.26%, 20=0.23%, 50=99.29% 00:28:23.926 cpu : usr=99.06%, sys=0.56%, ctx=17, majf=0, minf=30 00:28:23.926 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.926 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.926 issued rwts: total=6206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.926 filename1: (groupid=0, jobs=1): err= 0: pid=3243055: Wed May 15 17:19:10 2024 00:28:23.926 read: IOPS=622, BW=2488KiB/s (2548kB/s)(24.3MiB/10005msec) 00:28:23.926 slat (nsec): min=7124, max=62419, avg=19103.67, stdev=11174.03 00:28:23.926 clat (usec): min=5280, max=40791, avg=25567.42, stdev=2022.69 00:28:23.926 lat (usec): min=5294, max=40817, avg=25586.53, stdev=2022.25 00:28:23.926 clat percentiles (usec): 00:28:23.926 | 1.00th=[18744], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:28:23.926 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:23.926 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:28:23.926 | 99.00th=[28181], 99.50th=[28705], 99.90th=[40633], 99.95th=[40633], 00:28:23.926 | 99.99th=[40633] 00:28:23.926 bw ( KiB/s): min= 2304, max= 2693, per=4.21%, avg=2492.89, stdev=89.79, samples=19 00:28:23.926 iops : min= 576, max= 673, avg=623.21, stdev=22.42, samples=19 00:28:23.926 lat (msec) : 10=0.77%, 20=0.26%, 50=98.97% 00:28:23.926 cpu : usr=98.96%, sys=0.66%, ctx=13, majf=0, minf=44 00:28:23.926 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:23.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.926 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.926 issued rwts: total=6224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.926 filename1: (groupid=0, jobs=1): err= 0: pid=3243056: Wed May 15 17:19:10 2024 00:28:23.926 read: IOPS=622, BW=2491KiB/s (2551kB/s)(24.4MiB/10020msec) 00:28:23.926 slat (nsec): min=7188, max=78355, avg=29289.72, stdev=15601.64 00:28:23.926 clat (usec): min=5339, max=41260, avg=25469.19, stdev=2045.33 00:28:23.926 lat (usec): min=5374, max=41284, avg=25498.48, stdev=2045.32 00:28:23.927 clat percentiles (usec): 00:28:23.927 | 1.00th=[15926], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:28:23.927 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:23.927 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:28:23.927 | 99.00th=[27919], 99.50th=[28181], 99.90th=[41157], 99.95th=[41157], 00:28:23.927 | 99.99th=[41157] 00:28:23.927 bw ( KiB/s): min= 2432, max= 2693, per=4.20%, avg=2489.85, stdev=78.09, samples=20 
00:28:23.927 iops : min= 608, max= 673, avg=622.45, stdev=19.49, samples=20 00:28:23.927 lat (msec) : 10=0.74%, 20=0.29%, 50=98.97% 00:28:23.927 cpu : usr=98.94%, sys=0.65%, ctx=11, majf=0, minf=33 00:28:23.927 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 issued rwts: total=6240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.927 filename1: (groupid=0, jobs=1): err= 0: pid=3243057: Wed May 15 17:19:10 2024 00:28:23.927 read: IOPS=617, BW=2470KiB/s (2529kB/s)(24.2MiB/10018msec) 00:28:23.927 slat (nsec): min=9645, max=80134, avg=40452.22, stdev=13610.94 00:28:23.927 clat (usec): min=17325, max=54185, avg=25565.14, stdev=1698.05 00:28:23.927 lat (usec): min=17365, max=54204, avg=25605.60, stdev=1696.40 00:28:23.927 clat percentiles (usec): 00:28:23.927 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:23.927 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:23.927 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:28:23.927 | 99.00th=[27919], 99.50th=[28181], 99.90th=[54264], 99.95th=[54264], 00:28:23.927 | 99.99th=[54264] 00:28:23.927 bw ( KiB/s): min= 2304, max= 2560, per=4.17%, avg=2471.89, stdev=86.17, samples=19 00:28:23.927 iops : min= 576, max= 640, avg=617.89, stdev=21.59, samples=19 00:28:23.927 lat (msec) : 20=0.16%, 50=99.58%, 100=0.26% 00:28:23.927 cpu : usr=98.64%, sys=0.81%, ctx=86, majf=0, minf=31 00:28:23.927 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:23.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 issued rwts: total=6186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.927 filename1: (groupid=0, jobs=1): err= 0: pid=3243058: Wed May 15 17:19:10 2024 00:28:23.927 read: IOPS=617, BW=2470KiB/s (2530kB/s)(24.1MiB/10003msec) 00:28:23.927 slat (nsec): min=6644, max=87329, avg=39141.94, stdev=14819.02 00:28:23.927 clat (usec): min=11085, max=61856, avg=25568.07, stdev=2371.91 00:28:23.927 lat (usec): min=11094, max=61873, avg=25607.21, stdev=2371.19 00:28:23.927 clat percentiles (usec): 00:28:23.927 | 1.00th=[20317], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:23.927 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:23.927 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:28:23.927 | 99.00th=[30278], 99.50th=[35390], 99.90th=[61604], 99.95th=[61604], 00:28:23.927 | 99.99th=[61604] 00:28:23.927 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2466.26, stdev=75.80, samples=19 00:28:23.927 iops : min= 576, max= 640, avg=616.53, stdev=18.97, samples=19 00:28:23.927 lat (msec) : 20=0.81%, 50=98.93%, 100=0.26% 00:28:23.927 cpu : usr=98.72%, sys=0.73%, ctx=145, majf=0, minf=32 00:28:23.927 IO depths : 1=5.7%, 2=11.8%, 4=24.4%, 8=51.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:28:23.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 issued rwts: total=6178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.927 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:28:23.927 filename2: (groupid=0, jobs=1): err= 0: pid=3243059: Wed May 15 17:19:10 2024 00:28:23.927 read: IOPS=614, BW=2458KiB/s (2517kB/s)(24.1MiB/10056msec) 00:28:23.927 slat (usec): min=5, max=218, avg=43.23, stdev=16.12 00:28:23.927 clat (usec): min=23571, max=55443, avg=25532.39, stdev=1832.37 00:28:23.927 lat (usec): min=23587, max=55459, avg=25575.61, stdev=1831.67 00:28:23.927 clat percentiles (usec): 00:28:23.927 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:23.927 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:23.927 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[27395], 00:28:23.927 | 99.00th=[27919], 99.50th=[28181], 99.90th=[54264], 99.95th=[55313], 00:28:23.927 | 99.99th=[55313] 00:28:23.927 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2466.10, stdev=87.78, samples=20 00:28:23.927 iops : min= 576, max= 640, avg=616.45, stdev=21.98, samples=20 00:28:23.927 lat (msec) : 50=99.68%, 100=0.32% 00:28:23.927 cpu : usr=99.14%, sys=0.47%, ctx=16, majf=0, minf=24 00:28:23.927 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:23.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 issued rwts: total=6180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.927 filename2: (groupid=0, jobs=1): err= 0: pid=3243060: Wed May 15 17:19:10 2024 00:28:23.927 read: IOPS=618, BW=2472KiB/s (2531kB/s)(24.2MiB/10019msec) 00:28:23.927 slat (nsec): min=6585, max=87107, avg=25255.91, stdev=18442.76 00:28:23.927 clat (usec): min=18348, max=52543, avg=25693.86, stdev=1638.81 00:28:23.927 lat (usec): min=18362, max=52561, avg=25719.12, stdev=1636.93 00:28:23.927 clat percentiles (usec): 00:28:23.927 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:28:23.927 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:23.927 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:28:23.927 | 99.00th=[28181], 99.50th=[28443], 99.90th=[52691], 99.95th=[52691], 00:28:23.927 | 99.99th=[52691] 00:28:23.927 bw ( KiB/s): min= 2304, max= 2560, per=4.17%, avg=2469.85, stdev=83.89, samples=20 00:28:23.927 iops : min= 576, max= 640, avg=617.40, stdev=20.97, samples=20 00:28:23.927 lat (msec) : 20=0.19%, 50=99.55%, 100=0.26% 00:28:23.927 cpu : usr=98.19%, sys=1.09%, ctx=38, majf=0, minf=34 00:28:23.927 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.927 filename2: (groupid=0, jobs=1): err= 0: pid=3243061: Wed May 15 17:19:10 2024 00:28:23.927 read: IOPS=618, BW=2474KiB/s (2533kB/s)(24.2MiB/10013msec) 00:28:23.927 slat (nsec): min=6977, max=63778, avg=26689.17, stdev=13154.49 00:28:23.927 clat (usec): min=12877, max=52216, avg=25665.15, stdev=1536.40 00:28:23.927 lat (usec): min=12887, max=52246, avg=25691.84, stdev=1535.87 00:28:23.927 clat percentiles (usec): 00:28:23.927 | 1.00th=[24249], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:28:23.927 | 30.00th=[25297], 40.00th=[25297], 
50.00th=[25297], 60.00th=[25560], 00:28:23.927 | 70.00th=[25822], 80.00th=[26084], 90.00th=[26870], 95.00th=[27657], 00:28:23.927 | 99.00th=[28181], 99.50th=[28705], 99.90th=[45876], 99.95th=[45876], 00:28:23.927 | 99.99th=[52167] 00:28:23.927 bw ( KiB/s): min= 2304, max= 2560, per=4.17%, avg=2471.84, stdev=74.33, samples=19 00:28:23.927 iops : min= 576, max= 640, avg=617.89, stdev=18.58, samples=19 00:28:23.927 lat (msec) : 20=0.42%, 50=99.55%, 100=0.03% 00:28:23.927 cpu : usr=97.36%, sys=1.47%, ctx=57, majf=0, minf=50 00:28:23.927 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 issued rwts: total=6192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.927 filename2: (groupid=0, jobs=1): err= 0: pid=3243062: Wed May 15 17:19:10 2024 00:28:23.927 read: IOPS=617, BW=2470KiB/s (2530kB/s)(24.1MiB/10003msec) 00:28:23.927 slat (nsec): min=6484, max=87643, avg=36687.75, stdev=16925.74 00:28:23.927 clat (usec): min=2664, max=69305, avg=25620.35, stdev=3173.23 00:28:23.927 lat (usec): min=2679, max=69344, avg=25657.04, stdev=3173.54 00:28:23.927 clat percentiles (usec): 00:28:23.927 | 1.00th=[15533], 5.00th=[23725], 10.00th=[24511], 20.00th=[24773], 00:28:23.927 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:23.927 | 70.00th=[25822], 80.00th=[26346], 90.00th=[27395], 95.00th=[28443], 00:28:23.927 | 99.00th=[36439], 99.50th=[39584], 99.90th=[57410], 99.95th=[57410], 00:28:23.927 | 99.99th=[69731] 00:28:23.927 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2463.05, stdev=84.80, samples=19 00:28:23.927 iops : min= 576, max= 640, avg=615.74, stdev=21.18, samples=19 00:28:23.927 lat (msec) : 4=0.06%, 10=0.26%, 20=2.07%, 50=97.35%, 100=0.26% 00:28:23.927 cpu : usr=95.96%, sys=2.16%, ctx=237, majf=0, minf=32 00:28:23.927 IO depths : 1=3.3%, 2=7.9%, 4=19.5%, 8=59.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:28:23.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 complete : 0=0.0%, 4=93.0%, 8=2.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 issued rwts: total=6178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.927 filename2: (groupid=0, jobs=1): err= 0: pid=3243063: Wed May 15 17:19:10 2024 00:28:23.927 read: IOPS=617, BW=2470KiB/s (2529kB/s)(24.1MiB/10001msec) 00:28:23.927 slat (nsec): min=5959, max=80155, avg=41929.57, stdev=13303.81 00:28:23.927 clat (usec): min=17425, max=63101, avg=25547.44, stdev=2125.56 00:28:23.927 lat (usec): min=17435, max=63119, avg=25589.37, stdev=2124.17 00:28:23.927 clat percentiles (usec): 00:28:23.927 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:28:23.927 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:23.927 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26870], 95.00th=[27395], 00:28:23.927 | 99.00th=[27919], 99.50th=[28181], 99.90th=[63177], 99.95th=[63177], 00:28:23.927 | 99.99th=[63177] 00:28:23.927 bw ( KiB/s): min= 2304, max= 2560, per=4.17%, avg=2472.37, stdev=85.60, samples=19 00:28:23.927 iops : min= 576, max= 640, avg=618.05, stdev=21.42, samples=19 00:28:23.927 lat (msec) : 20=0.29%, 50=99.45%, 100=0.26% 00:28:23.927 cpu : usr=98.96%, sys=0.62%, ctx=77, majf=0, minf=37 00:28:23.927 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:23.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.927 issued rwts: total=6176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.927 filename2: (groupid=0, jobs=1): err= 0: pid=3243064: Wed May 15 17:19:10 2024 00:28:23.927 read: IOPS=622, BW=2492KiB/s (2552kB/s)(24.4MiB/10046msec) 00:28:23.927 slat (nsec): min=6241, max=80160, avg=18341.71, stdev=13965.65 00:28:23.928 clat (usec): min=11131, max=79047, avg=25586.41, stdev=3773.28 00:28:23.928 lat (usec): min=11174, max=79064, avg=25604.75, stdev=3771.42 00:28:23.928 clat percentiles (usec): 00:28:23.928 | 1.00th=[19268], 5.00th=[20317], 10.00th=[21365], 20.00th=[22938], 00:28:23.928 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:23.928 | 70.00th=[26084], 80.00th=[27132], 90.00th=[29230], 95.00th=[30802], 00:28:23.928 | 99.00th=[32900], 99.50th=[38536], 99.90th=[68682], 99.95th=[68682], 00:28:23.928 | 99.99th=[79168] 00:28:23.928 bw ( KiB/s): min= 2240, max= 2592, per=4.22%, avg=2498.21, stdev=85.67, samples=19 00:28:23.928 iops : min= 560, max= 648, avg=624.53, stdev=21.41, samples=19 00:28:23.928 lat (msec) : 20=3.15%, 50=96.50%, 100=0.35% 00:28:23.928 cpu : usr=98.46%, sys=0.84%, ctx=62, majf=0, minf=39 00:28:23.928 IO depths : 1=0.1%, 2=0.1%, 4=2.3%, 8=81.0%, 16=16.6%, 32=0.0%, >=64=0.0% 00:28:23.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.928 complete : 0=0.0%, 4=89.0%, 8=9.3%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.928 issued rwts: total=6258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.928 filename2: (groupid=0, jobs=1): err= 0: pid=3243065: Wed May 15 17:19:10 2024 00:28:23.928 read: IOPS=621, BW=2487KiB/s (2547kB/s)(24.3MiB/10003msec) 00:28:23.928 slat (nsec): min=6533, max=92604, avg=36348.00, stdev=17220.44 00:28:23.928 clat (usec): min=6817, max=57744, avg=25462.65, stdev=2823.06 00:28:23.928 lat (usec): min=6824, max=57780, avg=25499.00, stdev=2824.20 00:28:23.928 clat percentiles (usec): 00:28:23.928 | 1.00th=[16581], 5.00th=[22152], 10.00th=[24511], 20.00th=[24773], 00:28:23.928 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:28:23.928 | 70.00th=[25560], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:28:23.928 | 99.00th=[34341], 99.50th=[38536], 99.90th=[50070], 99.95th=[50070], 00:28:23.928 | 99.99th=[57934] 00:28:23.928 bw ( KiB/s): min= 2304, max= 2640, per=4.19%, avg=2483.95, stdev=87.46, samples=19 00:28:23.928 iops : min= 576, max= 660, avg=620.95, stdev=21.89, samples=19 00:28:23.928 lat (msec) : 10=0.26%, 20=3.57%, 50=95.92%, 100=0.26% 00:28:23.928 cpu : usr=98.90%, sys=0.72%, ctx=15, majf=0, minf=24 00:28:23.928 IO depths : 1=3.2%, 2=7.9%, 4=21.1%, 8=58.1%, 16=9.8%, 32=0.0%, >=64=0.0% 00:28:23.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.928 complete : 0=0.0%, 4=93.4%, 8=1.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.928 issued rwts: total=6220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.928 filename2: (groupid=0, jobs=1): err= 0: pid=3243066: Wed May 15 17:19:10 2024 00:28:23.928 read: IOPS=621, BW=2486KiB/s (2545kB/s)(24.3MiB/10003msec) 00:28:23.928 slat (nsec): min=6524, 
max=88532, avg=31530.68, stdev=16797.60 00:28:23.928 clat (usec): min=4766, max=64986, avg=25476.12, stdev=2960.85 00:28:23.928 lat (usec): min=4774, max=65008, avg=25507.65, stdev=2961.26 00:28:23.928 clat percentiles (usec): 00:28:23.928 | 1.00th=[18220], 5.00th=[21890], 10.00th=[24511], 20.00th=[24773], 00:28:23.928 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:28:23.928 | 70.00th=[25560], 80.00th=[26346], 90.00th=[27395], 95.00th=[27919], 00:28:23.928 | 99.00th=[32113], 99.50th=[35914], 99.90th=[64750], 99.95th=[64750], 00:28:23.928 | 99.99th=[64750] 00:28:23.928 bw ( KiB/s): min= 2304, max= 2672, per=4.19%, avg=2480.74, stdev=93.29, samples=19 00:28:23.928 iops : min= 576, max= 668, avg=620.16, stdev=23.30, samples=19 00:28:23.928 lat (msec) : 10=0.26%, 20=1.96%, 50=97.52%, 100=0.26% 00:28:23.928 cpu : usr=99.03%, sys=0.59%, ctx=25, majf=0, minf=29 00:28:23.928 IO depths : 1=4.4%, 2=9.0%, 4=19.3%, 8=58.2%, 16=9.1%, 32=0.0%, >=64=0.0% 00:28:23.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.928 complete : 0=0.0%, 4=92.7%, 8=2.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.928 issued rwts: total=6216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:23.928 00:28:23.928 Run status group 0 (all jobs): 00:28:23.928 READ: bw=57.9MiB/s (60.7MB/s), 2458KiB/s-2520KiB/s (2517kB/s-2581kB/s), io=582MiB (610MB), run=10001-10056msec 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.928 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.928 bdev_null0 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.929 [2024-05-15 17:19:10.373608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.929 bdev_null1 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local 
subsystem config 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.929 { 00:28:23.929 "params": { 00:28:23.929 "name": "Nvme$subsystem", 00:28:23.929 "trtype": "$TEST_TRANSPORT", 00:28:23.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.929 "adrfam": "ipv4", 00:28:23.929 "trsvcid": "$NVMF_PORT", 00:28:23.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.929 "hdgst": ${hdgst:-false}, 00:28:23.929 "ddgst": ${ddgst:-false} 00:28:23.929 }, 00:28:23.929 "method": "bdev_nvme_attach_controller" 00:28:23.929 } 00:28:23.929 EOF 00:28:23.929 )") 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.929 { 00:28:23.929 "params": { 00:28:23.929 "name": "Nvme$subsystem", 00:28:23.929 "trtype": "$TEST_TRANSPORT", 00:28:23.929 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.929 "adrfam": "ipv4", 00:28:23.929 "trsvcid": "$NVMF_PORT", 00:28:23.929 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.929 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.929 "hdgst": ${hdgst:-false}, 00:28:23.929 "ddgst": ${ddgst:-false} 00:28:23.929 }, 00:28:23.929 "method": "bdev_nvme_attach_controller" 00:28:23.929 } 00:28:23.929 EOF 00:28:23.929 )") 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # 
grep libasan 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:23.929 "params": { 00:28:23.929 "name": "Nvme0", 00:28:23.929 "trtype": "tcp", 00:28:23.929 "traddr": "10.0.0.2", 00:28:23.929 "adrfam": "ipv4", 00:28:23.929 "trsvcid": "4420", 00:28:23.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:23.929 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:23.929 "hdgst": false, 00:28:23.929 "ddgst": false 00:28:23.929 }, 00:28:23.929 "method": "bdev_nvme_attach_controller" 00:28:23.929 },{ 00:28:23.929 "params": { 00:28:23.929 "name": "Nvme1", 00:28:23.929 "trtype": "tcp", 00:28:23.929 "traddr": "10.0.0.2", 00:28:23.929 "adrfam": "ipv4", 00:28:23.929 "trsvcid": "4420", 00:28:23.929 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:23.929 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:23.929 "hdgst": false, 00:28:23.929 "ddgst": false 00:28:23.929 }, 00:28:23.929 "method": "bdev_nvme_attach_controller" 00:28:23.929 }' 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:23.929 17:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.929 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:23.929 ... 00:28:23.929 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:23.929 ... 
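The fio_bdev invocation traced above takes its whole configuration from two pipes: /dev/fd/62 carries the SPDK JSON printed just before it (two bdev_nvme_attach_controller entries for cnode0 and cnode1 on 10.0.0.2:4420), and /dev/fd/61 carries the job file that gen_fio_conf builds from the parameters set at target/dif.sh@115. Reproduced by hand it would look roughly like the sketch below; the option names mirror those parameters, and the filename values assume SPDK's usual Nvme<N>n1 naming for the attached namespaces, which the trace itself never spells out.

  # minimal sketch, not the harness code; paths as used elsewhere in this log
  cat > /tmp/dif_rand.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=8k,16k,128k      ; from target/dif.sh@115
  iodepth=8
  numjobs=2
  runtime=5
  [filename0]
  filename=Nvme0n1    ; assumed bdev name for cnode0's namespace
  [filename1]
  filename=Nvme1n1    ; assumed bdev name for cnode1's namespace
  EOF
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme.json /tmp/dif_rand.fio

  # where /tmp/nvme.json holds the JSON object printed by gen_nvmf_target_json above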
00:28:23.929 fio-3.35 00:28:23.929 Starting 4 threads 00:28:23.929 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.187 00:28:29.187 filename0: (groupid=0, jobs=1): err= 0: pid=3245017: Wed May 15 17:19:16 2024 00:28:29.187 read: IOPS=2680, BW=20.9MiB/s (22.0MB/s)(105MiB/5002msec) 00:28:29.187 slat (nsec): min=6044, max=61949, avg=15787.25, stdev=9809.80 00:28:29.187 clat (usec): min=1148, max=5198, avg=2939.45, stdev=386.85 00:28:29.187 lat (usec): min=1168, max=5228, avg=2955.24, stdev=387.37 00:28:29.187 clat percentiles (usec): 00:28:29.187 | 1.00th=[ 1991], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2704], 00:28:29.187 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 2966], 00:28:29.187 | 70.00th=[ 3032], 80.00th=[ 3163], 90.00th=[ 3326], 95.00th=[ 3556], 00:28:29.187 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 4883], 99.95th=[ 5080], 00:28:29.187 | 99.99th=[ 5145] 00:28:29.187 bw ( KiB/s): min=20928, max=22240, per=25.51%, avg=21525.33, stdev=396.55, samples=9 00:28:29.187 iops : min= 2616, max= 2780, avg=2690.67, stdev=49.57, samples=9 00:28:29.187 lat (msec) : 2=1.01%, 4=96.77%, 10=2.22% 00:28:29.187 cpu : usr=96.76%, sys=2.74%, ctx=35, majf=0, minf=120 00:28:29.187 IO depths : 1=0.1%, 2=3.4%, 4=67.9%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.187 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.187 issued rwts: total=13406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.187 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.187 filename0: (groupid=0, jobs=1): err= 0: pid=3245018: Wed May 15 17:19:16 2024 00:28:29.187 read: IOPS=2666, BW=20.8MiB/s (21.8MB/s)(104MiB/5003msec) 00:28:29.187 slat (nsec): min=6247, max=63150, avg=15911.34, stdev=11032.61 00:28:29.187 clat (usec): min=923, max=5381, avg=2953.20, stdev=501.32 00:28:29.187 lat (usec): min=951, max=5388, avg=2969.12, stdev=501.49 00:28:29.187 clat percentiles (usec): 00:28:29.187 | 1.00th=[ 1811], 5.00th=[ 2212], 10.00th=[ 2409], 20.00th=[ 2638], 00:28:29.187 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 2999], 00:28:29.187 | 70.00th=[ 3064], 80.00th=[ 3195], 90.00th=[ 3458], 95.00th=[ 4015], 00:28:29.187 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5080], 99.95th=[ 5145], 00:28:29.187 | 99.99th=[ 5276] 00:28:29.187 bw ( KiB/s): min=20720, max=23360, per=25.28%, avg=21332.80, stdev=766.67, samples=10 00:28:29.187 iops : min= 2590, max= 2920, avg=2666.60, stdev=95.83, samples=10 00:28:29.187 lat (usec) : 1000=0.02% 00:28:29.187 lat (msec) : 2=1.74%, 4=93.07%, 10=5.17% 00:28:29.187 cpu : usr=97.70%, sys=1.92%, ctx=12, majf=0, minf=38 00:28:29.187 IO depths : 1=0.2%, 2=4.1%, 4=68.0%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.187 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.187 issued rwts: total=13339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.187 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.187 filename1: (groupid=0, jobs=1): err= 0: pid=3245019: Wed May 15 17:19:16 2024 00:28:29.187 read: IOPS=2606, BW=20.4MiB/s (21.4MB/s)(102MiB/5001msec) 00:28:29.187 slat (usec): min=6, max=110, avg=14.99, stdev=10.92 00:28:29.187 clat (usec): min=847, max=5544, avg=3024.20, stdev=492.92 00:28:29.187 lat (usec): min=864, max=5557, avg=3039.19, stdev=492.61 00:28:29.187 clat percentiles (usec): 00:28:29.187 | 1.00th=[ 
1942], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2737], 00:28:29.187 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 3032], 00:28:29.187 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3556], 95.00th=[ 4146], 00:28:29.187 | 99.00th=[ 4686], 99.50th=[ 4817], 99.90th=[ 5145], 99.95th=[ 5276], 00:28:29.187 | 99.99th=[ 5342] 00:28:29.187 bw ( KiB/s): min=19856, max=22064, per=24.67%, avg=20817.00, stdev=749.62, samples=9 00:28:29.187 iops : min= 2482, max= 2758, avg=2602.11, stdev=93.70, samples=9 00:28:29.187 lat (usec) : 1000=0.01% 00:28:29.187 lat (msec) : 2=1.27%, 4=92.62%, 10=6.10% 00:28:29.187 cpu : usr=92.72%, sys=4.46%, ctx=130, majf=0, minf=91 00:28:29.187 IO depths : 1=0.2%, 2=3.4%, 4=69.1%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.187 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.187 issued rwts: total=13036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.187 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.187 filename1: (groupid=0, jobs=1): err= 0: pid=3245020: Wed May 15 17:19:16 2024 00:28:29.187 read: IOPS=2597, BW=20.3MiB/s (21.3MB/s)(102MiB/5002msec) 00:28:29.187 slat (nsec): min=6086, max=69615, avg=16187.55, stdev=10850.26 00:28:29.187 clat (usec): min=1128, max=5709, avg=3032.60, stdev=502.17 00:28:29.187 lat (usec): min=1140, max=5733, avg=3048.79, stdev=501.41 00:28:29.187 clat percentiles (usec): 00:28:29.187 | 1.00th=[ 2073], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2704], 00:28:29.187 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2999], 00:28:29.187 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3687], 95.00th=[ 4228], 00:28:29.187 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 5276], 00:28:29.187 | 99.99th=[ 5669] 00:28:29.187 bw ( KiB/s): min=20368, max=21344, per=24.63%, avg=20782.40, stdev=361.92, samples=10 00:28:29.187 iops : min= 2546, max= 2668, avg=2597.80, stdev=45.24, samples=10 00:28:29.187 lat (msec) : 2=0.66%, 4=92.17%, 10=7.17% 00:28:29.187 cpu : usr=97.54%, sys=2.10%, ctx=10, majf=0, minf=91 00:28:29.187 IO depths : 1=0.2%, 2=3.2%, 4=69.2%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.187 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.187 issued rwts: total=12992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.187 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.187 00:28:29.187 Run status group 0 (all jobs): 00:28:29.187 READ: bw=82.4MiB/s (86.4MB/s), 20.3MiB/s-20.9MiB/s (21.3MB/s-22.0MB/s), io=412MiB (432MB), run=5001-5003msec 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:29.187 17:19:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.187 00:28:29.187 real 0m24.396s 00:28:29.187 user 4m51.399s 00:28:29.187 sys 0m4.706s 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:29.187 17:19:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:29.187 ************************************ 00:28:29.187 END TEST fio_dif_rand_params 00:28:29.187 ************************************ 00:28:29.187 17:19:16 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:29.187 17:19:16 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:29.188 17:19:16 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:29.188 17:19:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:29.188 ************************************ 00:28:29.188 START TEST fio_dif_digest 00:28:29.188 ************************************ 00:28:29.188 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:28:29.188 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:29.188 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:29.188 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:29.188 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:29.188 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:29.188 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:29.188 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:29.188 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:29.188 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:29.188 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:29.188 17:19:16 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:29.188 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:29.188 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:29.446 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:29.446 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:29.446 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:29.446 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.446 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.446 bdev_null0 00:28:29.446 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:29.447 [2024-05-15 17:19:16.875314] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:29.447 { 00:28:29.447 "params": { 00:28:29.447 "name": "Nvme$subsystem", 00:28:29.447 "trtype": "$TEST_TRANSPORT", 
00:28:29.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.447 "adrfam": "ipv4", 00:28:29.447 "trsvcid": "$NVMF_PORT", 00:28:29.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.447 "hdgst": ${hdgst:-false}, 00:28:29.447 "ddgst": ${ddgst:-false} 00:28:29.447 }, 00:28:29.447 "method": "bdev_nvme_attach_controller" 00:28:29.447 } 00:28:29.447 EOF 00:28:29.447 )") 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:29.447 "params": { 00:28:29.447 "name": "Nvme0", 00:28:29.447 "trtype": "tcp", 00:28:29.447 "traddr": "10.0.0.2", 00:28:29.447 "adrfam": "ipv4", 00:28:29.447 "trsvcid": "4420", 00:28:29.447 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:29.447 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:29.447 "hdgst": true, 00:28:29.447 "ddgst": true 00:28:29.447 }, 00:28:29.447 "method": "bdev_nvme_attach_controller" 00:28:29.447 }' 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:29.447 17:19:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.704 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:29.704 ... 
00:28:29.704 fio-3.35 00:28:29.704 Starting 3 threads 00:28:29.704 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.893 00:28:41.893 filename0: (groupid=0, jobs=1): err= 0: pid=3246078: Wed May 15 17:19:27 2024 00:28:41.893 read: IOPS=277, BW=34.7MiB/s (36.3MB/s)(348MiB/10044msec) 00:28:41.893 slat (nsec): min=6645, max=51132, avg=11463.13, stdev=2280.32 00:28:41.893 clat (usec): min=7471, max=50664, avg=10789.62, stdev=1325.05 00:28:41.893 lat (usec): min=7482, max=50675, avg=10801.08, stdev=1325.01 00:28:41.893 clat percentiles (usec): 00:28:41.893 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:28:41.893 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:28:41.893 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:28:41.893 | 99.00th=[12911], 99.50th=[13042], 99.90th=[14091], 99.95th=[47449], 00:28:41.893 | 99.99th=[50594] 00:28:41.893 bw ( KiB/s): min=34560, max=36608, per=34.15%, avg=35622.40, stdev=611.90, samples=20 00:28:41.893 iops : min= 270, max= 286, avg=278.30, stdev= 4.78, samples=20 00:28:41.893 lat (msec) : 10=17.88%, 20=82.05%, 50=0.04%, 100=0.04% 00:28:41.893 cpu : usr=94.50%, sys=5.18%, ctx=17, majf=0, minf=77 00:28:41.893 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.894 issued rwts: total=2785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.894 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:41.894 filename0: (groupid=0, jobs=1): err= 0: pid=3246079: Wed May 15 17:19:27 2024 00:28:41.894 read: IOPS=271, BW=34.0MiB/s (35.6MB/s)(342MiB/10047msec) 00:28:41.894 slat (nsec): min=6602, max=26564, avg=11499.95, stdev=2241.23 00:28:41.894 clat (usec): min=6878, max=48227, avg=11002.48, stdev=1304.08 00:28:41.894 lat (usec): min=6891, max=48239, avg=11013.98, stdev=1304.02 00:28:41.894 clat percentiles (usec): 00:28:41.894 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:28:41.894 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:28:41.894 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:28:41.894 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14222], 99.95th=[46924], 00:28:41.894 | 99.99th=[47973] 00:28:41.894 bw ( KiB/s): min=34048, max=35840, per=33.50%, avg=34944.00, stdev=554.06, samples=20 00:28:41.894 iops : min= 266, max= 280, avg=273.00, stdev= 4.33, samples=20 00:28:41.894 lat (msec) : 10=12.30%, 20=87.63%, 50=0.07% 00:28:41.894 cpu : usr=94.49%, sys=5.19%, ctx=23, majf=0, minf=159 00:28:41.894 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.894 issued rwts: total=2732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.894 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:41.894 filename0: (groupid=0, jobs=1): err= 0: pid=3246080: Wed May 15 17:19:27 2024 00:28:41.894 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(334MiB/10044msec) 00:28:41.894 slat (nsec): min=6618, max=23443, avg=11733.50, stdev=2103.10 00:28:41.894 clat (usec): min=8214, max=48678, avg=11255.99, stdev=1297.47 00:28:41.894 lat (usec): min=8227, max=48685, avg=11267.73, stdev=1297.45 00:28:41.894 clat percentiles (usec): 00:28:41.894 | 
1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:28:41.894 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:28:41.894 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649], 00:28:41.894 | 99.00th=[13304], 99.50th=[13698], 99.90th=[14746], 99.95th=[45876], 00:28:41.894 | 99.99th=[48497] 00:28:41.894 bw ( KiB/s): min=32768, max=34816, per=32.74%, avg=34150.40, stdev=601.23, samples=20 00:28:41.894 iops : min= 256, max= 272, avg=266.80, stdev= 4.70, samples=20 00:28:41.894 lat (msec) : 10=5.96%, 20=93.97%, 50=0.07% 00:28:41.894 cpu : usr=94.46%, sys=5.22%, ctx=20, majf=0, minf=120 00:28:41.894 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.894 issued rwts: total=2670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.894 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:41.894 00:28:41.894 Run status group 0 (all jobs): 00:28:41.894 READ: bw=102MiB/s (107MB/s), 33.2MiB/s-34.7MiB/s (34.8MB/s-36.3MB/s), io=1023MiB (1073MB), run=10044-10047msec 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.894 00:28:41.894 real 0m11.208s 00:28:41.894 user 0m35.172s 00:28:41.894 sys 0m1.860s 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:41.894 17:19:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.894 ************************************ 00:28:41.894 END TEST fio_dif_digest 00:28:41.894 ************************************ 00:28:41.894 17:19:28 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:41.894 17:19:28 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:41.894 17:19:28 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:41.894 17:19:28 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:41.894 17:19:28 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:41.894 17:19:28 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:41.894 17:19:28 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:41.894 17:19:28 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:41.894 rmmod nvme_tcp 00:28:41.894 rmmod nvme_fabrics 00:28:41.894 
rmmod nvme_keyring 00:28:41.894 17:19:28 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:41.894 17:19:28 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:41.894 17:19:28 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:41.894 17:19:28 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3237469 ']' 00:28:41.894 17:19:28 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3237469 00:28:41.894 17:19:28 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 3237469 ']' 00:28:41.894 17:19:28 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 3237469 00:28:41.894 17:19:28 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:28:41.894 17:19:28 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:41.894 17:19:28 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3237469 00:28:41.894 17:19:28 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:41.894 17:19:28 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:41.894 17:19:28 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3237469' 00:28:41.894 killing process with pid 3237469 00:28:41.894 17:19:28 nvmf_dif -- common/autotest_common.sh@965 -- # kill 3237469 00:28:41.894 [2024-05-15 17:19:28.217610] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:41.894 17:19:28 nvmf_dif -- common/autotest_common.sh@970 -- # wait 3237469 00:28:41.894 17:19:28 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:41.894 17:19:28 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:43.267 Waiting for block devices as requested 00:28:43.268 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:43.268 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:43.525 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:43.525 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:43.525 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:43.525 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:43.783 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:43.783 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:43.783 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:43.783 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:44.041 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:44.041 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:44.041 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:44.298 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:44.298 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:44.298 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:44.298 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:44.556 17:19:32 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:44.556 17:19:32 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:44.556 17:19:32 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:44.556 17:19:32 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:44.556 17:19:32 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.556 17:19:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:44.556 17:19:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.455 17:19:34 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:46.455 00:28:46.455 
real 1m13.337s 00:28:46.455 user 7m8.719s 00:28:46.455 sys 0m18.747s 00:28:46.455 17:19:34 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:46.455 17:19:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:46.455 ************************************ 00:28:46.455 END TEST nvmf_dif 00:28:46.455 ************************************ 00:28:46.713 17:19:34 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:46.714 17:19:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:46.714 17:19:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:46.714 17:19:34 -- common/autotest_common.sh@10 -- # set +x 00:28:46.714 ************************************ 00:28:46.714 START TEST nvmf_abort_qd_sizes 00:28:46.714 ************************************ 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:46.714 * Looking for test storage... 00:28:46.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
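One detail worth pulling out of the nvmf/common.sh constants sourced above: NVME_HOSTNQN is whatever nvme gen-hostnqn returns, NVME_HOSTID is just its trailing UUID, and the pair is kept in the NVME_HOST array so initiator-side nvme connect calls can present a stable host identity. A minimal reconstruction (the connect line is only illustrative; this test's subsystem, nqn.2016-06.io.spdk:testnqn on 10.0.0.2:4420, is created further down):

  hostnqn=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
  hostid=${hostnqn##*:}         # UUID part, matches NVME_HOSTID above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn "$hostnqn" --hostid "$hostid"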
00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:46.714 17:19:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.973 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:51.974 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:51.974 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:51.974 Found net devices under 0000:86:00.0: cvl_0_0 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:51.974 Found net devices under 0000:86:00.1: cvl_0_1 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@404 
-- # (( 2 == 0 )) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:51.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:51.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:28:51.974 00:28:51.974 --- 10.0.0.2 ping statistics --- 00:28:51.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.974 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:51.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:28:51.974 00:28:51.974 --- 10.0.0.1 ping statistics --- 00:28:51.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.974 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:51.974 17:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:54.556 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:54.556 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:55.491 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3253856 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3253856 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 3253856 ']' 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
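Condensed from the nvmf_tcp_init trace above, the phy-mode test bed amounts to parking one E810 port in a private network namespace for the target and leaving its sibling in the root namespace for the initiator; the interface names cvl_0_0/cvl_0_1 are this rig's renamed ports and will differ elsewhere:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

This is also why the nvmf_tgt launch traced just above is wrapped in ip netns exec cvl_0_0_ns_spdk: the 10.0.0.2 listener only exists inside that namespace.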
00:28:55.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:55.491 17:19:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:55.491 [2024-05-15 17:19:43.007611] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:28:55.491 [2024-05-15 17:19:43.007652] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.491 EAL: No free 2048 kB hugepages reported on node 1 00:28:55.491 [2024-05-15 17:19:43.064479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:55.749 [2024-05-15 17:19:43.151437] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.749 [2024-05-15 17:19:43.151473] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.749 [2024-05-15 17:19:43.151480] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.749 [2024-05-15 17:19:43.151486] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.749 [2024-05-15 17:19:43.151491] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:55.749 [2024-05-15 17:19:43.151529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.749 [2024-05-15 17:19:43.151654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.749 [2024-05-15 17:19:43.151717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.749 [2024-05-15 17:19:43.151718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:56.313 17:19:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:56.313 ************************************ 00:28:56.313 START TEST spdk_target_abort 00:28:56.313 ************************************ 00:28:56.313 17:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:28:56.313 17:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:56.313 17:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:28:56.313 17:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:56.313 17:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:59.587 spdk_targetn1 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:59.587 [2024-05-15 17:19:46.737970] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
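Note: nvme_in_userspace above walks pci_bus_cache for class 0x010802 devices and keeps only the functions still bound to the kernel nvme driver (on a non-FreeBSD host), which is how 0000:5e:00.0 is selected as the drive handed to SPDK. A rough sysfs-based equivalent, not the helper from scripts/common.sh:

    # list NVMe-class PCI functions (class 0x010802) still bound to the kernel nvme driver
    for dev in /sys/bus/pci/devices/*; do
        [[ $(cat "$dev/class") == 0x010802 ]] || continue
        bdf=$(basename "$dev")
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
    done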
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:59.587 [2024-05-15 17:19:46.766723] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:59.587 [2024-05-15 17:19:46.766958] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:59.587 17:19:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
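Note: the RPCs traced above build the whole abort target in five calls: the local NVMe drive is attached as bdev spdk_targetn1, a TCP transport is created, subsystem nqn.2016-06.io.spdk:testnqn gets that bdev as namespace 1, and a listener is opened on 10.0.0.2:4420. The same sequence as plain rpc.py calls, with the workspace path shortened and a default RPC socket assumed (this run runs them through ip netns exec):

    RPC=./scripts/rpc.py    # path assumed; the run uses the full jenkins workspace path
    $RPC bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420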
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:59.587 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.861 Initializing NVMe Controllers 00:29:02.861 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:02.861 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:02.861 Initialization complete. Launching workers. 00:29:02.861 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15002, failed: 0 00:29:02.861 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1370, failed to submit 13632 00:29:02.861 success 767, unsuccess 603, failed 0 00:29:02.861 17:19:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:02.861 17:19:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:02.861 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.136 Initializing NVMe Controllers 00:29:06.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:06.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:06.136 Initialization complete. Launching workers. 00:29:06.136 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8478, failed: 0 00:29:06.136 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1251, failed to submit 7227 00:29:06.136 success 319, unsuccess 932, failed 0 00:29:06.136 17:19:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:06.136 17:19:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:06.136 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.410 Initializing NVMe Controllers 00:29:09.410 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:09.410 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:09.410 Initialization complete. Launching workers. 
00:29:09.410 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37279, failed: 0 00:29:09.410 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2941, failed to submit 34338 00:29:09.410 success 600, unsuccess 2341, failed 0 00:29:09.410 17:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:09.410 17:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.410 17:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:09.410 17:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.410 17:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:09.410 17:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.410 17:19:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.341 17:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.341 17:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3253856 00:29:10.341 17:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 3253856 ']' 00:29:10.341 17:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 3253856 00:29:10.341 17:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:29:10.341 17:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:10.341 17:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3253856 00:29:10.341 17:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:10.341 17:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:10.341 17:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3253856' 00:29:10.341 killing process with pid 3253856 00:29:10.341 17:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 3253856 00:29:10.341 [2024-05-15 17:19:57.828064] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:10.341 17:19:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 3253856 00:29:10.598 00:29:10.598 real 0m14.133s 00:29:10.598 user 0m56.224s 00:29:10.598 sys 0m2.314s 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:10.598 ************************************ 00:29:10.598 END TEST spdk_target_abort 00:29:10.598 ************************************ 00:29:10.598 17:19:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:10.598 17:19:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:10.598 17:19:58 nvmf_abort_qd_sizes -- 
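Note: rabort runs the same abort example three times against the TRID string assembled from trtype/adrfam/traddr/trsvcid/subnqn, varying only the queue depth; each run reports I/Os completed, aborts submitted versus failed to submit, and how many submitted aborts succeeded. A condensed sketch of that loop (binary path shortened, TRID as in this run):

    TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        # 50/50 random read/write of 4 KiB I/Os while a second thread submits aborts
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$TRID"
    done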
common/autotest_common.sh@1103 -- # xtrace_disable 00:29:10.598 17:19:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:10.598 ************************************ 00:29:10.598 START TEST kernel_target_abort 00:29:10.598 ************************************ 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:10.598 17:19:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:12.498 Waiting for block devices as requested 00:29:12.498 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:12.757 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:12.757 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:12.757 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:13.048 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:13.048 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:13.048 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:13.048 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:13.307 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:13.307 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:13.307 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:13.307 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:13.565 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:13.565 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:13.565 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:13.823 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:13.823 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:13.823 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:13.823 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:13.823 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:13.823 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:29:13.823 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:13.823 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:29:13.823 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:13.823 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:13.823 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:13.823 No valid GPT data, bailing 00:29:13.823 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:13.823 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:13.823 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:13.823 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:13.824 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:13.824 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:13.824 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:14.082 17:20:01 
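Note: after setup.sh reset hands the drive back to the kernel nvme driver, the script scans /sys/block/nvme* for a namespace that is not zoned and carries no partition table ("No valid GPT data, bailing" is the good case here); that device becomes the backing store for the kernel target. A compact version of that selection, using blkid in place of the repo's spdk-gpt.py helper:

    nvme_dev=
    for blk in /sys/block/nvme*; do
        [[ -e $blk ]] || continue
        [[ $(cat "$blk/queue/zoned" 2>/dev/null) == none ]] || continue   # skip zoned namespaces
        dev=/dev/$(basename "$blk")
        # skip drives that already carry a partition table
        [[ -z $(blkid -s PTTYPE -o value "$dev") ]] && { nvme_dev=$dev; break; }
    done
    echo "using ${nvme_dev:-none}"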
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:14.082 00:29:14.082 Discovery Log Number of Records 2, Generation counter 2 00:29:14.082 =====Discovery Log Entry 0====== 00:29:14.082 trtype: tcp 00:29:14.082 adrfam: ipv4 00:29:14.082 subtype: current discovery subsystem 00:29:14.082 treq: not specified, sq flow control disable supported 00:29:14.082 portid: 1 00:29:14.082 trsvcid: 4420 00:29:14.082 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:14.082 traddr: 10.0.0.1 00:29:14.082 eflags: none 00:29:14.082 sectype: none 00:29:14.082 =====Discovery Log Entry 1====== 00:29:14.082 trtype: tcp 00:29:14.082 adrfam: ipv4 00:29:14.082 subtype: nvme subsystem 00:29:14.082 treq: not specified, sq flow control disable supported 00:29:14.082 portid: 1 00:29:14.082 trsvcid: 4420 00:29:14.082 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:14.082 traddr: 10.0.0.1 00:29:14.082 eflags: none 00:29:14.082 sectype: none 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:14.082 17:20:01 
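Note: the mkdir/echo/ln sequence above is the entire kernel NVMe/TCP target: a configfs subsystem with one namespace backed by /dev/nvme0n1 and port 1 listening on 10.0.0.1:4420, joined by the symlink; the nvme discover that follows confirms both the discovery subsystem and testnqn are visible. The echo redirect targets are not visible in the xtrace, so the standard nvmet configfs attribute names are assumed in this sketch:

    cfg=/sys/kernel/config/nvmet
    subsys=$cfg/subsystems/nqn.2016-06.io.spdk:testnqn

    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$cfg/ports/1"
    echo 1            >"$subsys/attr_allow_any_host"          # assumed; no host entries are created
    echo /dev/nvme0n1 >"$subsys/namespaces/1/device_path"
    echo 1            >"$subsys/namespaces/1/enable"
    echo 10.0.0.1     >"$cfg/ports/1/addr_traddr"
    echo tcp          >"$cfg/ports/1/addr_trtype"
    echo 4420         >"$cfg/ports/1/addr_trsvcid"
    echo ipv4         >"$cfg/ports/1/addr_adrfam"
    ln -s "$subsys" "$cfg/ports/1/subsystems/"
    nvme discover -t tcp -a 10.0.0.1 -s 4420                  # should report 2 records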
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:14.082 17:20:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:14.082 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.360 Initializing NVMe Controllers 00:29:17.360 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:17.360 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:17.360 Initialization complete. Launching workers. 00:29:17.360 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82056, failed: 0 00:29:17.360 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 82056, failed to submit 0 00:29:17.360 success 0, unsuccess 82056, failed 0 00:29:17.360 17:20:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:17.360 17:20:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:17.360 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.691 Initializing NVMe Controllers 00:29:20.691 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:20.691 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:20.691 Initialization complete. Launching workers. 
00:29:20.691 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 134403, failed: 0 00:29:20.691 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33690, failed to submit 100713 00:29:20.691 success 0, unsuccess 33690, failed 0 00:29:20.691 17:20:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:20.691 17:20:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:20.691 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.223 Initializing NVMe Controllers 00:29:23.223 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:23.223 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:23.223 Initialization complete. Launching workers. 00:29:23.223 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 128240, failed: 0 00:29:23.223 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32098, failed to submit 96142 00:29:23.223 success 0, unsuccess 32098, failed 0 00:29:23.223 17:20:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:23.223 17:20:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:23.223 17:20:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:23.223 17:20:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:23.223 17:20:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:23.223 17:20:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:23.223 17:20:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:23.223 17:20:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:23.223 17:20:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:23.481 17:20:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:26.007 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:26.007 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:26.007 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:26.007 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:26.007 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:26.007 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:26.007 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:26.007 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:26.007 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:26.007 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:26.007 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:26.007 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:26.007 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:26.007 0000:80:04.2 (8086 2021): ioatdma 
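Note: clean_kernel_target mirrors the setup in reverse: the namespace is disabled, the port-to-subsystem symlink removed, the configfs directories deleted innermost-first, and the nvmet modules unloaded before setup.sh rebinds the devices for the next test. A sketch with the same paths as above (again, the echo redirect target is inferred, not shown in the trace):

    cfg=/sys/kernel/config/nvmet
    subsys=$cfg/subsystems/nqn.2016-06.io.spdk:testnqn

    echo 0 >"$subsys/namespaces/1/enable"        # disable the namespace first
    rm -f  "$cfg/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir  "$subsys/namespaces/1" "$cfg/ports/1" "$subsys"
    modprobe -r nvmet_tcp nvmet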
-> vfio-pci 00:29:26.007 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:26.007 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:26.940 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:26.940 00:29:26.940 real 0m16.401s 00:29:26.940 user 0m7.818s 00:29:26.940 sys 0m4.478s 00:29:26.940 17:20:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:26.940 17:20:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:26.940 ************************************ 00:29:26.940 END TEST kernel_target_abort 00:29:26.940 ************************************ 00:29:26.940 17:20:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:26.940 17:20:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:26.940 17:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:26.940 17:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:26.940 17:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:26.940 17:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:26.940 17:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:26.940 17:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:26.941 rmmod nvme_tcp 00:29:26.941 rmmod nvme_fabrics 00:29:27.198 rmmod nvme_keyring 00:29:27.198 17:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:27.198 17:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:27.198 17:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:27.198 17:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3253856 ']' 00:29:27.198 17:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3253856 00:29:27.198 17:20:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 3253856 ']' 00:29:27.198 17:20:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 3253856 00:29:27.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3253856) - No such process 00:29:27.199 17:20:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 3253856 is not found' 00:29:27.199 Process with pid 3253856 is not found 00:29:27.199 17:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:27.199 17:20:14 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:29.723 Waiting for block devices as requested 00:29:29.723 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:29.723 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:29.723 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:29.723 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:29.723 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:29.723 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:29.723 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:29.980 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:29.980 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:29.980 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:29.980 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:30.238 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:30.238 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:30.238 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:30.238 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:30.497 
0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:30.497 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:30.497 17:20:18 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:30.497 17:20:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:30.497 17:20:18 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:30.497 17:20:18 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:30.497 17:20:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.497 17:20:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:30.497 17:20:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.026 17:20:20 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:33.026 00:29:33.026 real 0m46.020s 00:29:33.026 user 1m7.626s 00:29:33.026 sys 0m14.468s 00:29:33.026 17:20:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:33.026 17:20:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:33.026 ************************************ 00:29:33.026 END TEST nvmf_abort_qd_sizes 00:29:33.026 ************************************ 00:29:33.026 17:20:20 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:33.026 17:20:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:33.026 17:20:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:33.026 17:20:20 -- common/autotest_common.sh@10 -- # set +x 00:29:33.026 ************************************ 00:29:33.026 START TEST keyring_file 00:29:33.026 ************************************ 00:29:33.026 17:20:20 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:33.026 * Looking for test storage... 
00:29:33.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:33.026 17:20:20 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:33.026 17:20:20 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:33.026 17:20:20 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:33.026 17:20:20 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:33.026 17:20:20 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:33.026 17:20:20 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:33.026 17:20:20 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.026 17:20:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.027 17:20:20 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.027 17:20:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:33.027 17:20:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:33.027 17:20:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:33.027 17:20:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:33.027 17:20:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:33.027 17:20:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:33.027 17:20:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:33.027 17:20:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.smC6LGjf7r 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:33.027 17:20:20 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.smC6LGjf7r 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.smC6LGjf7r 00:29:33.027 17:20:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.smC6LGjf7r 00:29:33.027 17:20:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.n79xHaLl9j 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:33.027 17:20:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.n79xHaLl9j 00:29:33.027 17:20:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.n79xHaLl9j 00:29:33.027 17:20:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.n79xHaLl9j 00:29:33.027 17:20:20 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:33.027 17:20:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=3262609 00:29:33.027 17:20:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3262609 00:29:33.027 17:20:20 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3262609 ']' 00:29:33.027 17:20:20 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.027 17:20:20 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:33.027 17:20:20 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.027 17:20:20 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:33.027 17:20:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:33.027 [2024-05-15 17:20:20.530637] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
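Note: prep_key turns each hex key into a TLS PSK interchange file: mktemp creates the file, the inline python helper wraps the raw hex in the NVMeTLSkey-1 envelope (digest 0 here), and chmod 0600 restricts the file before it is registered with the keyring. A sketch of the file handling only; the exact envelope encoding lives in format_interchange_psk in nvmf/common.sh and is not re-implemented here:

    key_hex=00112233445566778899aabbccddeeff
    key_path=$(mktemp)                          # e.g. /tmp/tmp.smC6LGjf7r in this run
    # format_interchange_psk "$key_hex" 0 would emit the real NVMeTLSkey-1 string;
    # a placeholder stands in for that encoding here
    echo "NVMeTLSkey-1:00:<body produced by format_interchange_psk>:" >"$key_path"
    chmod 0600 "$key_path"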
00:29:33.027 [2024-05-15 17:20:20.530687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262609 ] 00:29:33.027 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.027 [2024-05-15 17:20:20.584716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.027 [2024-05-15 17:20:20.655889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:29:33.285 17:20:20 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:33.285 [2024-05-15 17:20:20.854287] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.285 null0 00:29:33.285 [2024-05-15 17:20:20.886304] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:33.285 [2024-05-15 17:20:20.886350] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:33.285 [2024-05-15 17:20:20.886663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:33.285 [2024-05-15 17:20:20.894341] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.285 17:20:20 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:33.285 [2024-05-15 17:20:20.902360] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:33.285 request: 00:29:33.285 { 00:29:33.285 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:33.285 "secure_channel": false, 00:29:33.285 "listen_address": { 00:29:33.285 "trtype": "tcp", 00:29:33.285 "traddr": "127.0.0.1", 00:29:33.285 "trsvcid": "4420" 00:29:33.285 }, 00:29:33.285 "method": "nvmf_subsystem_add_listener", 00:29:33.285 "req_id": 1 00:29:33.285 } 00:29:33.285 Got JSON-RPC error response 00:29:33.285 response: 00:29:33.285 { 00:29:33.285 "code": -32602, 00:29:33.285 
"message": "Invalid parameters" 00:29:33.285 } 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:33.285 17:20:20 keyring_file -- keyring/file.sh@46 -- # bperfpid=3262626 00:29:33.285 17:20:20 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3262626 /var/tmp/bperf.sock 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3262626 ']' 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:33.285 17:20:20 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:33.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:33.285 17:20:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:33.543 [2024-05-15 17:20:20.945948] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 00:29:33.543 [2024-05-15 17:20:20.945988] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262626 ] 00:29:33.543 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.543 [2024-05-15 17:20:20.998987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.543 [2024-05-15 17:20:21.077482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.106 17:20:21 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:34.106 17:20:21 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:29:34.106 17:20:21 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.smC6LGjf7r 00:29:34.106 17:20:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.smC6LGjf7r 00:29:34.364 17:20:21 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.n79xHaLl9j 00:29:34.364 17:20:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.n79xHaLl9j 00:29:34.621 17:20:22 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:34.621 17:20:22 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:34.621 17:20:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.621 17:20:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:34.621 17:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:29:34.621 17:20:22 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.smC6LGjf7r == \/\t\m\p\/\t\m\p\.\s\m\C\6\L\G\j\f\7\r ]] 00:29:34.621 17:20:22 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:29:34.621 17:20:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:34.621 17:20:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.621 17:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:34.621 17:20:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:34.878 17:20:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.n79xHaLl9j == \/\t\m\p\/\t\m\p\.\n\7\9\x\H\a\L\l\9\j ]] 00:29:34.878 17:20:22 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:34.878 17:20:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:34.878 17:20:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:34.878 17:20:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:34.878 17:20:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:34.878 17:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.135 17:20:22 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:35.135 17:20:22 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:35.135 17:20:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:35.135 17:20:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:35.135 17:20:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:35.135 17:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.135 17:20:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:35.135 17:20:22 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:35.135 17:20:22 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:35.135 17:20:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:35.392 [2024-05-15 17:20:22.934318] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:35.392 nvme0n1 00:29:35.392 17:20:23 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:35.392 17:20:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:35.392 17:20:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:35.392 17:20:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:35.392 17:20:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:35.392 17:20:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.649 17:20:23 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:35.649 17:20:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:35.649 
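Note: with bdevperf running, the two key files are registered under the names key0 and key1, the jq checks confirm each key's path and a refcount of 1, and the controller attach with --psk key0 succeeds (TLS is still flagged experimental), creating nvme0n1 inside bdevperf. The same flow as direct rpc.py calls against the bdevperf socket:

    RPC="./scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC keyring_file_add_key key0 /tmp/tmp.smC6LGjf7r
    $RPC keyring_file_add_key key1 /tmp/tmp.n79xHaLl9j
    $RPC keyring_get_keys | jq '.[] | select(.name == "key0")'      # path and refcnt checks
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0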
17:20:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:35.649 17:20:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:35.649 17:20:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:35.649 17:20:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:35.649 17:20:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:35.906 17:20:23 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:35.906 17:20:23 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:35.906 Running I/O for 1 seconds... 00:29:36.837 00:29:36.837 Latency(us) 00:29:36.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.837 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:36.837 nvme0n1 : 1.01 14451.96 56.45 0.00 0.00 8828.46 2920.63 13107.20 00:29:36.837 =================================================================================================================== 00:29:36.837 Total : 14451.96 56.45 0.00 0.00 8828.46 2920.63 13107.20 00:29:36.837 0 00:29:36.837 17:20:24 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:36.837 17:20:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:37.094 17:20:24 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:37.094 17:20:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:37.094 17:20:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:37.094 17:20:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.094 17:20:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:37.094 17:20:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:37.352 17:20:24 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:37.352 17:20:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:37.352 17:20:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:37.352 17:20:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:37.352 17:20:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.352 17:20:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:37.352 17:20:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:37.609 17:20:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:37.609 17:20:25 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:37.609 17:20:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:37.609 17:20:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:37.609 17:20:25 keyring_file -- common/autotest_common.sh@636 
-- # local arg=bperf_cmd 00:29:37.609 17:20:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:37.609 17:20:25 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:37.609 17:20:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:37.609 17:20:25 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:37.609 17:20:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:37.609 [2024-05-15 17:20:25.181174] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:37.609 [2024-05-15 17:20:25.181871] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16731e0 (107): Transport endpoint is not connected 00:29:37.609 [2024-05-15 17:20:25.182867] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16731e0 (9): Bad file descriptor 00:29:37.609 [2024-05-15 17:20:25.183868] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:37.609 [2024-05-15 17:20:25.183876] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:37.609 [2024-05-15 17:20:25.183883] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
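
(Condensed, the negative case being driven here is the call below, issued with key1 even though the successful attach earlier in the run used key0; the request is expected to fail, and the harness's NOT wrapper turns the non-zero exit into a passing check. The arguments are the ones from the log; the trailing || echo is illustrative only and replaces the NOT helper.)

    # Attach with the wrong PSK: the connect is expected to be rejected.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key1 \
        || echo "attach with key1 failed as expected"
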
00:29:37.609 request: 00:29:37.609 { 00:29:37.609 "name": "nvme0", 00:29:37.609 "trtype": "tcp", 00:29:37.609 "traddr": "127.0.0.1", 00:29:37.609 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:37.609 "adrfam": "ipv4", 00:29:37.609 "trsvcid": "4420", 00:29:37.609 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:37.609 "psk": "key1", 00:29:37.609 "method": "bdev_nvme_attach_controller", 00:29:37.609 "req_id": 1 00:29:37.609 } 00:29:37.609 Got JSON-RPC error response 00:29:37.609 response: 00:29:37.609 { 00:29:37.609 "code": -32602, 00:29:37.609 "message": "Invalid parameters" 00:29:37.609 } 00:29:37.609 17:20:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:37.609 17:20:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:37.609 17:20:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:37.609 17:20:25 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:37.609 17:20:25 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:37.609 17:20:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:37.609 17:20:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:37.609 17:20:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.609 17:20:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:37.609 17:20:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:37.867 17:20:25 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:37.867 17:20:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:37.867 17:20:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:37.867 17:20:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:37.867 17:20:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:37.867 17:20:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:37.867 17:20:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.124 17:20:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:38.124 17:20:25 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:38.124 17:20:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:38.124 17:20:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:38.124 17:20:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:38.381 17:20:25 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:38.381 17:20:25 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:38.381 17:20:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.638 17:20:26 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:38.638 17:20:26 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.smC6LGjf7r 00:29:38.638 17:20:26 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.smC6LGjf7r 00:29:38.638 17:20:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:38.638 17:20:26 
keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.smC6LGjf7r 00:29:38.638 17:20:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:38.638 17:20:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:38.638 17:20:26 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:38.638 17:20:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:38.638 17:20:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.smC6LGjf7r 00:29:38.638 17:20:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.smC6LGjf7r 00:29:38.638 [2024-05-15 17:20:26.242981] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.smC6LGjf7r': 0100660 00:29:38.638 [2024-05-15 17:20:26.243008] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:38.638 request: 00:29:38.638 { 00:29:38.638 "name": "key0", 00:29:38.638 "path": "/tmp/tmp.smC6LGjf7r", 00:29:38.638 "method": "keyring_file_add_key", 00:29:38.638 "req_id": 1 00:29:38.638 } 00:29:38.638 Got JSON-RPC error response 00:29:38.638 response: 00:29:38.638 { 00:29:38.638 "code": -1, 00:29:38.638 "message": "Operation not permitted" 00:29:38.638 } 00:29:38.638 17:20:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:38.638 17:20:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:38.638 17:20:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:38.638 17:20:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:38.638 17:20:26 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.smC6LGjf7r 00:29:38.638 17:20:26 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.smC6LGjf7r 00:29:38.638 17:20:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.smC6LGjf7r 00:29:38.895 17:20:26 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.smC6LGjf7r 00:29:38.895 17:20:26 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:38.896 17:20:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:38.896 17:20:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:38.896 17:20:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:38.896 17:20:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:38.896 17:20:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:39.153 17:20:26 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:39.153 17:20:26 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.153 17:20:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:39.153 17:20:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.153 17:20:26 
keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:39.153 17:20:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:39.153 17:20:26 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:39.153 17:20:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:39.153 17:20:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.153 17:20:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.153 [2024-05-15 17:20:26.776379] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.smC6LGjf7r': No such file or directory 00:29:39.153 [2024-05-15 17:20:26.776403] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:39.153 [2024-05-15 17:20:26.776426] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:39.153 [2024-05-15 17:20:26.776433] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:39.153 [2024-05-15 17:20:26.776438] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:39.153 request: 00:29:39.153 { 00:29:39.153 "name": "nvme0", 00:29:39.153 "trtype": "tcp", 00:29:39.153 "traddr": "127.0.0.1", 00:29:39.153 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:39.153 "adrfam": "ipv4", 00:29:39.153 "trsvcid": "4420", 00:29:39.153 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:39.153 "psk": "key0", 00:29:39.153 "method": "bdev_nvme_attach_controller", 00:29:39.153 "req_id": 1 00:29:39.153 } 00:29:39.153 Got JSON-RPC error response 00:29:39.153 response: 00:29:39.153 { 00:29:39.153 "code": -19, 00:29:39.153 "message": "No such device" 00:29:39.153 } 00:29:39.153 17:20:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:39.153 17:20:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:39.153 17:20:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:39.153 17:20:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:39.153 17:20:26 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:39.153 17:20:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:39.411 17:20:26 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:39.411 17:20:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:39.411 17:20:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:39.411 17:20:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:39.411 17:20:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:39.411 17:20:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:39.411 17:20:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9NoW2rmD4A 00:29:39.411 17:20:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:39.411 17:20:26 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:39.411 17:20:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:39.411 17:20:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:39.411 17:20:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:39.411 17:20:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:39.411 17:20:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:39.411 17:20:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9NoW2rmD4A 00:29:39.411 17:20:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9NoW2rmD4A 00:29:39.411 17:20:27 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.9NoW2rmD4A 00:29:39.411 17:20:27 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9NoW2rmD4A 00:29:39.411 17:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9NoW2rmD4A 00:29:39.676 17:20:27 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.676 17:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:39.957 nvme0n1 00:29:39.957 17:20:27 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:29:39.957 17:20:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:39.957 17:20:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:39.957 17:20:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:39.957 17:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:39.957 17:20:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:39.957 17:20:27 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:39.957 17:20:27 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:39.957 17:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:40.230 17:20:27 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:40.230 17:20:27 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:40.230 17:20:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.230 17:20:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:40.230 17:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.488 17:20:27 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:40.488 17:20:27 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:40.488 17:20:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:40.488 17:20:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:40.488 17:20:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:40.488 17:20:27 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:40.488 17:20:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.488 17:20:28 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:40.488 17:20:28 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:40.488 17:20:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:40.746 17:20:28 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:40.746 17:20:28 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:40.746 17:20:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:41.003 17:20:28 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:41.003 17:20:28 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9NoW2rmD4A 00:29:41.003 17:20:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9NoW2rmD4A 00:29:41.003 17:20:28 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.n79xHaLl9j 00:29:41.003 17:20:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.n79xHaLl9j 00:29:41.260 17:20:28 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:41.260 17:20:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:41.517 nvme0n1 00:29:41.517 17:20:29 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:41.517 17:20:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:41.775 17:20:29 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:41.775 "subsystems": [ 00:29:41.775 { 00:29:41.775 "subsystem": "keyring", 00:29:41.775 "config": [ 00:29:41.775 { 00:29:41.775 "method": "keyring_file_add_key", 00:29:41.775 "params": { 00:29:41.775 "name": "key0", 00:29:41.775 "path": "/tmp/tmp.9NoW2rmD4A" 00:29:41.775 } 00:29:41.775 }, 00:29:41.775 { 00:29:41.775 "method": "keyring_file_add_key", 00:29:41.775 "params": { 00:29:41.775 "name": "key1", 00:29:41.775 "path": "/tmp/tmp.n79xHaLl9j" 00:29:41.775 } 00:29:41.775 } 00:29:41.775 ] 00:29:41.775 }, 00:29:41.775 { 00:29:41.775 "subsystem": "iobuf", 00:29:41.775 "config": [ 00:29:41.775 { 00:29:41.775 "method": "iobuf_set_options", 00:29:41.775 "params": { 00:29:41.775 "small_pool_count": 8192, 00:29:41.775 "large_pool_count": 1024, 00:29:41.775 "small_bufsize": 8192, 00:29:41.775 "large_bufsize": 135168 00:29:41.775 } 00:29:41.775 } 00:29:41.775 ] 00:29:41.775 }, 00:29:41.775 { 00:29:41.775 "subsystem": "sock", 00:29:41.775 "config": [ 00:29:41.775 { 00:29:41.775 "method": "sock_impl_set_options", 00:29:41.775 "params": { 00:29:41.775 
"impl_name": "posix", 00:29:41.775 "recv_buf_size": 2097152, 00:29:41.775 "send_buf_size": 2097152, 00:29:41.775 "enable_recv_pipe": true, 00:29:41.775 "enable_quickack": false, 00:29:41.775 "enable_placement_id": 0, 00:29:41.775 "enable_zerocopy_send_server": true, 00:29:41.775 "enable_zerocopy_send_client": false, 00:29:41.775 "zerocopy_threshold": 0, 00:29:41.775 "tls_version": 0, 00:29:41.775 "enable_ktls": false 00:29:41.775 } 00:29:41.775 }, 00:29:41.775 { 00:29:41.775 "method": "sock_impl_set_options", 00:29:41.775 "params": { 00:29:41.775 "impl_name": "ssl", 00:29:41.775 "recv_buf_size": 4096, 00:29:41.775 "send_buf_size": 4096, 00:29:41.775 "enable_recv_pipe": true, 00:29:41.775 "enable_quickack": false, 00:29:41.775 "enable_placement_id": 0, 00:29:41.775 "enable_zerocopy_send_server": true, 00:29:41.775 "enable_zerocopy_send_client": false, 00:29:41.775 "zerocopy_threshold": 0, 00:29:41.775 "tls_version": 0, 00:29:41.775 "enable_ktls": false 00:29:41.775 } 00:29:41.775 } 00:29:41.775 ] 00:29:41.775 }, 00:29:41.775 { 00:29:41.775 "subsystem": "vmd", 00:29:41.775 "config": [] 00:29:41.775 }, 00:29:41.775 { 00:29:41.775 "subsystem": "accel", 00:29:41.775 "config": [ 00:29:41.775 { 00:29:41.775 "method": "accel_set_options", 00:29:41.775 "params": { 00:29:41.775 "small_cache_size": 128, 00:29:41.775 "large_cache_size": 16, 00:29:41.775 "task_count": 2048, 00:29:41.775 "sequence_count": 2048, 00:29:41.775 "buf_count": 2048 00:29:41.775 } 00:29:41.775 } 00:29:41.775 ] 00:29:41.775 }, 00:29:41.775 { 00:29:41.775 "subsystem": "bdev", 00:29:41.775 "config": [ 00:29:41.775 { 00:29:41.775 "method": "bdev_set_options", 00:29:41.775 "params": { 00:29:41.775 "bdev_io_pool_size": 65535, 00:29:41.775 "bdev_io_cache_size": 256, 00:29:41.775 "bdev_auto_examine": true, 00:29:41.775 "iobuf_small_cache_size": 128, 00:29:41.775 "iobuf_large_cache_size": 16 00:29:41.775 } 00:29:41.775 }, 00:29:41.775 { 00:29:41.775 "method": "bdev_raid_set_options", 00:29:41.775 "params": { 00:29:41.775 "process_window_size_kb": 1024 00:29:41.775 } 00:29:41.775 }, 00:29:41.775 { 00:29:41.775 "method": "bdev_iscsi_set_options", 00:29:41.775 "params": { 00:29:41.775 "timeout_sec": 30 00:29:41.775 } 00:29:41.775 }, 00:29:41.775 { 00:29:41.775 "method": "bdev_nvme_set_options", 00:29:41.775 "params": { 00:29:41.775 "action_on_timeout": "none", 00:29:41.775 "timeout_us": 0, 00:29:41.775 "timeout_admin_us": 0, 00:29:41.775 "keep_alive_timeout_ms": 10000, 00:29:41.775 "arbitration_burst": 0, 00:29:41.775 "low_priority_weight": 0, 00:29:41.775 "medium_priority_weight": 0, 00:29:41.775 "high_priority_weight": 0, 00:29:41.775 "nvme_adminq_poll_period_us": 10000, 00:29:41.775 "nvme_ioq_poll_period_us": 0, 00:29:41.775 "io_queue_requests": 512, 00:29:41.775 "delay_cmd_submit": true, 00:29:41.775 "transport_retry_count": 4, 00:29:41.775 "bdev_retry_count": 3, 00:29:41.775 "transport_ack_timeout": 0, 00:29:41.775 "ctrlr_loss_timeout_sec": 0, 00:29:41.775 "reconnect_delay_sec": 0, 00:29:41.775 "fast_io_fail_timeout_sec": 0, 00:29:41.775 "disable_auto_failback": false, 00:29:41.775 "generate_uuids": false, 00:29:41.775 "transport_tos": 0, 00:29:41.775 "nvme_error_stat": false, 00:29:41.775 "rdma_srq_size": 0, 00:29:41.775 "io_path_stat": false, 00:29:41.775 "allow_accel_sequence": false, 00:29:41.775 "rdma_max_cq_size": 0, 00:29:41.775 "rdma_cm_event_timeout_ms": 0, 00:29:41.775 "dhchap_digests": [ 00:29:41.775 "sha256", 00:29:41.775 "sha384", 00:29:41.775 "sha512" 00:29:41.775 ], 00:29:41.775 "dhchap_dhgroups": [ 00:29:41.775 "null", 
00:29:41.775 "ffdhe2048", 00:29:41.775 "ffdhe3072", 00:29:41.775 "ffdhe4096", 00:29:41.775 "ffdhe6144", 00:29:41.775 "ffdhe8192" 00:29:41.775 ] 00:29:41.775 } 00:29:41.775 }, 00:29:41.775 { 00:29:41.775 "method": "bdev_nvme_attach_controller", 00:29:41.775 "params": { 00:29:41.775 "name": "nvme0", 00:29:41.775 "trtype": "TCP", 00:29:41.775 "adrfam": "IPv4", 00:29:41.775 "traddr": "127.0.0.1", 00:29:41.775 "trsvcid": "4420", 00:29:41.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:41.775 "prchk_reftag": false, 00:29:41.775 "prchk_guard": false, 00:29:41.775 "ctrlr_loss_timeout_sec": 0, 00:29:41.775 "reconnect_delay_sec": 0, 00:29:41.775 "fast_io_fail_timeout_sec": 0, 00:29:41.775 "psk": "key0", 00:29:41.775 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:41.775 "hdgst": false, 00:29:41.775 "ddgst": false 00:29:41.775 } 00:29:41.775 }, 00:29:41.775 { 00:29:41.775 "method": "bdev_nvme_set_hotplug", 00:29:41.775 "params": { 00:29:41.775 "period_us": 100000, 00:29:41.775 "enable": false 00:29:41.775 } 00:29:41.775 }, 00:29:41.775 { 00:29:41.775 "method": "bdev_wait_for_examine" 00:29:41.775 } 00:29:41.775 ] 00:29:41.775 }, 00:29:41.775 { 00:29:41.775 "subsystem": "nbd", 00:29:41.775 "config": [] 00:29:41.775 } 00:29:41.775 ] 00:29:41.775 }' 00:29:41.775 17:20:29 keyring_file -- keyring/file.sh@114 -- # killprocess 3262626 00:29:41.775 17:20:29 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3262626 ']' 00:29:41.775 17:20:29 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3262626 00:29:41.775 17:20:29 keyring_file -- common/autotest_common.sh@951 -- # uname 00:29:41.775 17:20:29 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:41.775 17:20:29 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3262626 00:29:41.775 17:20:29 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:41.775 17:20:29 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:41.775 17:20:29 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3262626' 00:29:41.775 killing process with pid 3262626 00:29:41.775 17:20:29 keyring_file -- common/autotest_common.sh@965 -- # kill 3262626 00:29:41.775 Received shutdown signal, test time was about 1.000000 seconds 00:29:41.775 00:29:41.775 Latency(us) 00:29:41.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.776 =================================================================================================================== 00:29:41.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:41.776 17:20:29 keyring_file -- common/autotest_common.sh@970 -- # wait 3262626 00:29:42.034 17:20:29 keyring_file -- keyring/file.sh@117 -- # bperfpid=3264139 00:29:42.034 17:20:29 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3264139 /var/tmp/bperf.sock 00:29:42.034 17:20:29 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3264139 ']' 00:29:42.034 17:20:29 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:42.034 17:20:29 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:42.034 17:20:29 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:42.034 17:20:29 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:29:42.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:42.034 17:20:29 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:42.034 "subsystems": [ 00:29:42.034 { 00:29:42.034 "subsystem": "keyring", 00:29:42.034 "config": [ 00:29:42.034 { 00:29:42.034 "method": "keyring_file_add_key", 00:29:42.034 "params": { 00:29:42.034 "name": "key0", 00:29:42.034 "path": "/tmp/tmp.9NoW2rmD4A" 00:29:42.034 } 00:29:42.034 }, 00:29:42.034 { 00:29:42.034 "method": "keyring_file_add_key", 00:29:42.034 "params": { 00:29:42.034 "name": "key1", 00:29:42.034 "path": "/tmp/tmp.n79xHaLl9j" 00:29:42.034 } 00:29:42.034 } 00:29:42.034 ] 00:29:42.034 }, 00:29:42.034 { 00:29:42.034 "subsystem": "iobuf", 00:29:42.034 "config": [ 00:29:42.034 { 00:29:42.034 "method": "iobuf_set_options", 00:29:42.034 "params": { 00:29:42.034 "small_pool_count": 8192, 00:29:42.034 "large_pool_count": 1024, 00:29:42.034 "small_bufsize": 8192, 00:29:42.034 "large_bufsize": 135168 00:29:42.034 } 00:29:42.034 } 00:29:42.034 ] 00:29:42.034 }, 00:29:42.034 { 00:29:42.034 "subsystem": "sock", 00:29:42.034 "config": [ 00:29:42.034 { 00:29:42.034 "method": "sock_impl_set_options", 00:29:42.034 "params": { 00:29:42.034 "impl_name": "posix", 00:29:42.034 "recv_buf_size": 2097152, 00:29:42.034 "send_buf_size": 2097152, 00:29:42.034 "enable_recv_pipe": true, 00:29:42.034 "enable_quickack": false, 00:29:42.034 "enable_placement_id": 0, 00:29:42.034 "enable_zerocopy_send_server": true, 00:29:42.034 "enable_zerocopy_send_client": false, 00:29:42.034 "zerocopy_threshold": 0, 00:29:42.034 "tls_version": 0, 00:29:42.034 "enable_ktls": false 00:29:42.034 } 00:29:42.034 }, 00:29:42.034 { 00:29:42.034 "method": "sock_impl_set_options", 00:29:42.034 "params": { 00:29:42.034 "impl_name": "ssl", 00:29:42.034 "recv_buf_size": 4096, 00:29:42.034 "send_buf_size": 4096, 00:29:42.034 "enable_recv_pipe": true, 00:29:42.034 "enable_quickack": false, 00:29:42.034 "enable_placement_id": 0, 00:29:42.034 "enable_zerocopy_send_server": true, 00:29:42.034 "enable_zerocopy_send_client": false, 00:29:42.034 "zerocopy_threshold": 0, 00:29:42.034 "tls_version": 0, 00:29:42.034 "enable_ktls": false 00:29:42.034 } 00:29:42.034 } 00:29:42.034 ] 00:29:42.034 }, 00:29:42.034 { 00:29:42.034 "subsystem": "vmd", 00:29:42.034 "config": [] 00:29:42.034 }, 00:29:42.034 { 00:29:42.034 "subsystem": "accel", 00:29:42.034 "config": [ 00:29:42.034 { 00:29:42.034 "method": "accel_set_options", 00:29:42.034 "params": { 00:29:42.034 "small_cache_size": 128, 00:29:42.034 "large_cache_size": 16, 00:29:42.034 "task_count": 2048, 00:29:42.034 "sequence_count": 2048, 00:29:42.034 "buf_count": 2048 00:29:42.034 } 00:29:42.034 } 00:29:42.034 ] 00:29:42.034 }, 00:29:42.034 { 00:29:42.034 "subsystem": "bdev", 00:29:42.034 "config": [ 00:29:42.034 { 00:29:42.034 "method": "bdev_set_options", 00:29:42.034 "params": { 00:29:42.034 "bdev_io_pool_size": 65535, 00:29:42.034 "bdev_io_cache_size": 256, 00:29:42.034 "bdev_auto_examine": true, 00:29:42.034 "iobuf_small_cache_size": 128, 00:29:42.034 "iobuf_large_cache_size": 16 00:29:42.034 } 00:29:42.034 }, 00:29:42.034 { 00:29:42.034 "method": "bdev_raid_set_options", 00:29:42.034 "params": { 00:29:42.034 "process_window_size_kb": 1024 00:29:42.034 } 00:29:42.034 }, 00:29:42.034 { 00:29:42.034 "method": "bdev_iscsi_set_options", 00:29:42.034 "params": { 00:29:42.034 "timeout_sec": 30 00:29:42.034 } 00:29:42.034 }, 00:29:42.034 { 00:29:42.034 "method": "bdev_nvme_set_options", 
00:29:42.034 "params": { 00:29:42.034 "action_on_timeout": "none", 00:29:42.034 "timeout_us": 0, 00:29:42.034 "timeout_admin_us": 0, 00:29:42.034 "keep_alive_timeout_ms": 10000, 00:29:42.034 "arbitration_burst": 0, 00:29:42.034 "low_priority_weight": 0, 00:29:42.034 "medium_priority_weight": 0, 00:29:42.034 "high_priority_weight": 0, 00:29:42.034 "nvme_adminq_poll_period_us": 10000, 00:29:42.034 "nvme_ioq_poll_period_us": 0, 00:29:42.034 "io_queue_requests": 512, 00:29:42.034 "delay_cmd_submit": true, 00:29:42.034 "transport_retry_count": 4, 00:29:42.034 "bdev_retry_count": 3, 00:29:42.034 "transport_ack_timeout": 0, 00:29:42.034 "ctrlr_loss_timeout_sec": 0, 00:29:42.034 "reconnect_delay_sec": 0, 00:29:42.035 "fast_io_fail_timeout_sec": 0, 00:29:42.035 "disable_auto_failback": false, 00:29:42.035 "generate_uuids": false, 00:29:42.035 "transport_tos": 0, 00:29:42.035 "nvme_error_stat": false, 00:29:42.035 "rdma_srq_size": 0, 00:29:42.035 "io_path_stat": false, 00:29:42.035 "allow_accel_sequence": false, 00:29:42.035 "rdma_max_cq_size": 0, 00:29:42.035 "rdma_cm_event_timeout_ms": 0, 00:29:42.035 "dhchap_digests": [ 00:29:42.035 "sha256", 00:29:42.035 "sha384", 00:29:42.035 "sha512" 00:29:42.035 ], 00:29:42.035 "dhchap_dhgroups": [ 00:29:42.035 "null", 00:29:42.035 "ffdhe2048", 00:29:42.035 "ffdhe3072", 00:29:42.035 "ffdhe4096", 00:29:42.035 "ffdhe6144", 00:29:42.035 "ffdhe8192" 00:29:42.035 ] 00:29:42.035 } 00:29:42.035 }, 00:29:42.035 { 00:29:42.035 "method": "bdev_nvme_attach_controller", 00:29:42.035 "params": { 00:29:42.035 "name": "nvme0", 00:29:42.035 "trtype": "TCP", 00:29:42.035 "adrfam": "IPv4", 00:29:42.035 "traddr": "127.0.0.1", 00:29:42.035 "trsvcid": "4420", 00:29:42.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:42.035 "prchk_reftag": false, 00:29:42.035 "prchk_guard": false, 00:29:42.035 "ctrlr_loss_timeout_sec": 0, 00:29:42.035 "reconnect_delay_sec": 0, 00:29:42.035 "fast_io_fail_timeout_sec": 0, 00:29:42.035 "psk": "key0", 00:29:42.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:42.035 "hdgst": false, 00:29:42.035 "ddgst": false 00:29:42.035 } 00:29:42.035 }, 00:29:42.035 { 00:29:42.035 "method": "bdev_nvme_set_hotplug", 00:29:42.035 "params": { 00:29:42.035 "period_us": 100000, 00:29:42.035 "enable": false 00:29:42.035 } 00:29:42.035 }, 00:29:42.035 { 00:29:42.035 "method": "bdev_wait_for_examine" 00:29:42.035 } 00:29:42.035 ] 00:29:42.035 }, 00:29:42.035 { 00:29:42.035 "subsystem": "nbd", 00:29:42.035 "config": [] 00:29:42.035 } 00:29:42.035 ] 00:29:42.035 }' 00:29:42.035 17:20:29 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:42.035 17:20:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:42.035 [2024-05-15 17:20:29.573294] Starting SPDK v24.05-pre git sha1 0ba8ca574 / DPDK 23.11.0 initialization... 
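
(The second bdevperf instance above is started with "-c /dev/fd/63": the JSON blob the test echoes, itself captured from save_config on the first instance, is handed to the new process on a file descriptor instead of being written to disk. A sketch of that invocation follows, assuming the descriptor comes from bash process substitution and that $config holds the echoed JSON; both of those names, and the shortened binary path, are illustrative.)

    # Relaunch bdevperf, pre-loading keyring/sock/bdev settings from the saved config.
    ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config")
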
00:29:42.035 [2024-05-15 17:20:29.573342] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264139 ] 00:29:42.035 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.035 [2024-05-15 17:20:29.626542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.292 [2024-05-15 17:20:29.694690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.292 [2024-05-15 17:20:29.844752] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:42.856 17:20:30 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:42.856 17:20:30 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:29:42.856 17:20:30 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:42.856 17:20:30 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:42.856 17:20:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.114 17:20:30 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:43.114 17:20:30 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:43.114 17:20:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:43.114 17:20:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:43.114 17:20:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:43.114 17:20:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:43.114 17:20:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.114 17:20:30 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:43.114 17:20:30 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:43.114 17:20:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:43.114 17:20:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:43.114 17:20:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:43.114 17:20:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:43.114 17:20:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:43.371 17:20:30 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:43.371 17:20:30 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:43.371 17:20:30 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:43.371 17:20:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:43.628 17:20:31 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:43.628 17:20:31 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:43.628 17:20:31 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.9NoW2rmD4A /tmp/tmp.n79xHaLl9j 00:29:43.628 17:20:31 keyring_file -- keyring/file.sh@20 -- # killprocess 3264139 00:29:43.628 17:20:31 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3264139 ']' 00:29:43.628 17:20:31 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3264139 00:29:43.628 17:20:31 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:29:43.628 17:20:31 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:43.628 17:20:31 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3264139 00:29:43.628 17:20:31 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:43.628 17:20:31 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:43.628 17:20:31 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3264139' 00:29:43.628 killing process with pid 3264139 00:29:43.628 17:20:31 keyring_file -- common/autotest_common.sh@965 -- # kill 3264139 00:29:43.628 Received shutdown signal, test time was about 1.000000 seconds 00:29:43.628 00:29:43.628 Latency(us) 00:29:43.628 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.628 =================================================================================================================== 00:29:43.628 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:43.628 17:20:31 keyring_file -- common/autotest_common.sh@970 -- # wait 3264139 00:29:43.886 17:20:31 keyring_file -- keyring/file.sh@21 -- # killprocess 3262609 00:29:43.886 17:20:31 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3262609 ']' 00:29:43.886 17:20:31 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3262609 00:29:43.886 17:20:31 keyring_file -- common/autotest_common.sh@951 -- # uname 00:29:43.886 17:20:31 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:43.886 17:20:31 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3262609 00:29:43.886 17:20:31 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:43.886 17:20:31 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:43.886 17:20:31 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3262609' 00:29:43.886 killing process with pid 3262609 00:29:43.886 17:20:31 keyring_file -- common/autotest_common.sh@965 -- # kill 3262609 00:29:43.886 [2024-05-15 17:20:31.386512] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:43.886 [2024-05-15 17:20:31.386551] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:43.886 17:20:31 keyring_file -- common/autotest_common.sh@970 -- # wait 3262609 00:29:44.144 00:29:44.144 real 0m11.468s 00:29:44.144 user 0m27.611s 00:29:44.144 sys 0m2.727s 00:29:44.144 17:20:31 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:44.144 17:20:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:44.144 ************************************ 00:29:44.144 END TEST keyring_file 00:29:44.144 ************************************ 00:29:44.144 17:20:31 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:29:44.144 17:20:31 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:29:44.144 17:20:31 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:44.144 17:20:31 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:44.144 17:20:31 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:29:44.144 17:20:31 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:29:44.144 17:20:31 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:29:44.144 17:20:31 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:29:44.144 
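
(The teardown running here follows the usual cleanup pattern: delete the generated key files, then stop the second bdevperf instance, pid 3264139 on reactor_1 in this run, and the nvmf target the test started earlier, pid 3262609 on reactor_0. A sketch with illustrative variable names; the killprocess helper in autotest_common.sh wraps the plain kill shown below with uname and ps guards.)

    # Drop the temporary PSK files and stop both SPDK processes.
    rm -f /tmp/tmp.9NoW2rmD4A /tmp/tmp.n79xHaLl9j
    kill "$bperfpid"   # bdevperf, pid 3264139 in this run
    kill "$nvmfpid"    # nvmf target, pid 3262609 in this run
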
17:20:31 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:44.144 17:20:31 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:44.144 17:20:31 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:29:44.144 17:20:31 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:29:44.144 17:20:31 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:29:44.144 17:20:31 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:44.144 17:20:31 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:44.144 17:20:31 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:44.144 17:20:31 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:29:44.144 17:20:31 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:29:44.144 17:20:31 -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:44.144 17:20:31 -- common/autotest_common.sh@10 -- # set +x 00:29:44.144 17:20:31 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:29:44.144 17:20:31 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:29:44.144 17:20:31 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:29:44.144 17:20:31 -- common/autotest_common.sh@10 -- # set +x 00:29:49.405 INFO: APP EXITING 00:29:49.405 INFO: killing all VMs 00:29:49.405 INFO: killing vhost app 00:29:49.405 INFO: EXIT DONE 00:29:51.304 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:29:51.304 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:29:51.304 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:29:51.304 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:29:51.304 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:29:51.304 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:29:51.304 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:29:51.304 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:29:51.304 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:29:51.304 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:29:51.304 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:29:51.561 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:29:51.561 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:29:51.561 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:29:51.561 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:29:51.561 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:29:51.561 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:29:54.088 Cleaning 00:29:54.088 Removing: /var/run/dpdk/spdk0/config 00:29:54.088 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:54.088 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:54.088 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:54.088 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:54.088 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:54.088 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:54.088 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:54.088 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:54.346 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:54.346 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:54.346 Removing: /var/run/dpdk/spdk1/config 00:29:54.346 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:54.346 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:54.346 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:54.346 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:54.346 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:54.346 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:54.346 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:54.346 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:54.346 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:54.346 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:54.346 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:54.346 Removing: /var/run/dpdk/spdk2/config 00:29:54.346 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:54.346 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:54.346 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:54.346 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:54.346 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:54.346 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:54.346 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:54.346 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:54.346 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:54.346 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:54.346 Removing: /var/run/dpdk/spdk3/config 00:29:54.346 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:54.346 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:54.346 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:54.346 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:54.346 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:54.346 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:54.346 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:54.346 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:54.346 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:54.346 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:54.346 Removing: /var/run/dpdk/spdk4/config 00:29:54.346 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:54.346 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:54.346 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:54.346 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:54.346 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:54.346 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:54.346 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:54.346 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:54.346 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:54.346 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:54.346 Removing: /dev/shm/bdev_svc_trace.1 00:29:54.346 Removing: /dev/shm/nvmf_trace.0 00:29:54.346 Removing: /dev/shm/spdk_tgt_trace.pid2880806 00:29:54.346 Removing: /var/run/dpdk/spdk0 00:29:54.346 Removing: /var/run/dpdk/spdk1 00:29:54.346 Removing: /var/run/dpdk/spdk2 00:29:54.346 Removing: /var/run/dpdk/spdk3 00:29:54.346 Removing: /var/run/dpdk/spdk4 00:29:54.346 Removing: /var/run/dpdk/spdk_pid2878532 00:29:54.346 Removing: /var/run/dpdk/spdk_pid2879737 00:29:54.346 Removing: /var/run/dpdk/spdk_pid2880806 00:29:54.346 Removing: /var/run/dpdk/spdk_pid2881441 00:29:54.346 Removing: /var/run/dpdk/spdk_pid2882407 00:29:54.346 Removing: /var/run/dpdk/spdk_pid2882645 00:29:54.346 Removing: /var/run/dpdk/spdk_pid2883623 00:29:54.346 Removing: /var/run/dpdk/spdk_pid2883847 00:29:54.346 Removing: /var/run/dpdk/spdk_pid2883982 00:29:54.346 Removing: /var/run/dpdk/spdk_pid2885601 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2886961 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2887251 
00:29:54.605 Removing: /var/run/dpdk/spdk_pid2887539 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2887840 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2888134 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2888387 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2888635 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2888916 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2889887 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2892874 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2893359 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2893623 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2893649 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2894129 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2894353 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2894728 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2894864 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2895125 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2895357 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2895615 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2895643 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2896184 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2896440 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2896727 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2896994 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2897025 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2897303 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2897553 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2897798 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2898052 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2898300 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2898547 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2898800 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2899054 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2899306 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2899553 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2899801 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2900056 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2900302 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2900558 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2900809 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2901063 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2901344 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2901635 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2901932 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2902216 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2902518 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2902597 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2902907 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2906697 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2950372 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2954641 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2965153 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2970553 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2974512 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2975019 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2986606 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2986689 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2987441 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2988360 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2989274 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2989741 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2989869 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2990170 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2990204 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2990214 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2991129 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2992039 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2992929 
00:29:54.605 Removing: /var/run/dpdk/spdk_pid2993422 00:29:54.605 Removing: /var/run/dpdk/spdk_pid2993434 00:29:54.865 Removing: /var/run/dpdk/spdk_pid2993662 00:29:54.865 Removing: /var/run/dpdk/spdk_pid2994905 00:29:54.865 Removing: /var/run/dpdk/spdk_pid2996101 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3004950 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3005202 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3009448 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3015249 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3017916 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3028316 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3037213 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3039031 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3039959 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3057078 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3060888 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3084514 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3088906 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3090757 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3092985 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3093223 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3093456 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3093699 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3094214 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3096056 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3097062 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3097534 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3099852 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3100394 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3101092 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3105345 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3115064 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3119104 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3125121 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3126616 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3127958 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3132596 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3137010 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3144388 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3144390 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3149091 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3149321 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3149548 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3149840 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3149981 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3154252 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3154790 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3159154 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3161697 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3167172 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3172858 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3181186 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3188678 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3188680 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3206818 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3207420 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3208119 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3208590 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3209561 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3210252 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3210902 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3211437 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3215683 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3215923 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3221973 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3222132 00:29:54.865 Removing: /var/run/dpdk/spdk_pid3224378 
00:29:54.865 Removing: /var/run/dpdk/spdk_pid3232653
00:29:54.865 Removing: /var/run/dpdk/spdk_pid3232711
00:29:54.865 Removing: /var/run/dpdk/spdk_pid3237743
00:29:54.865 Removing: /var/run/dpdk/spdk_pid3239694
00:29:54.865 Removing: /var/run/dpdk/spdk_pid3241604
00:29:55.123 Removing: /var/run/dpdk/spdk_pid3242717
00:29:55.123 Removing: /var/run/dpdk/spdk_pid3244695
00:29:55.123 Removing: /var/run/dpdk/spdk_pid3245968
00:29:55.123 Removing: /var/run/dpdk/spdk_pid3254481
00:29:55.123 Removing: /var/run/dpdk/spdk_pid3254947
00:29:55.123 Removing: /var/run/dpdk/spdk_pid3255598
00:29:55.123 Removing: /var/run/dpdk/spdk_pid3257748
00:29:55.123 Removing: /var/run/dpdk/spdk_pid3258326
00:29:55.123 Removing: /var/run/dpdk/spdk_pid3258809
00:29:55.123 Removing: /var/run/dpdk/spdk_pid3262609
00:29:55.123 Removing: /var/run/dpdk/spdk_pid3262626
00:29:55.123 Removing: /var/run/dpdk/spdk_pid3264139
00:29:55.123 Clean
00:29:55.123 17:20:42 -- common/autotest_common.sh@1447 -- # return 0
00:29:55.123 17:20:42 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup
00:29:55.123 17:20:42 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:55.123 17:20:42 -- common/autotest_common.sh@10 -- # set +x
00:29:55.123 17:20:42 -- spdk/autotest.sh@382 -- # timing_exit autotest
00:29:55.123 17:20:42 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:55.123 17:20:42 -- common/autotest_common.sh@10 -- # set +x
00:29:55.123 17:20:42 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:29:55.123 17:20:42 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:29:55.123 17:20:42 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:29:55.123 17:20:42 -- spdk/autotest.sh@387 -- # hash lcov
00:29:55.123 17:20:42 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:29:55.123 17:20:42 -- spdk/autotest.sh@389 -- # hostname
00:29:55.123 17:20:42 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:29:55.380 geninfo: WARNING: invalid characters removed from testname!
00:30:17.297 17:21:02 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:17.297 17:21:04 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:19.226 17:21:06 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:21.129 17:21:08 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:23.030 17:21:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:24.405 17:21:12 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:26.305 17:21:13 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:26.306 17:21:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:26.306 17:21:13 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:26.306 17:21:13 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:26.306 17:21:13 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:26.306 17:21:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:26.306 17:21:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:26.306 17:21:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:26.306 17:21:13 -- paths/export.sh@5 -- $ export PATH
00:30:26.306 17:21:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:26.306 17:21:13 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:30:26.306 17:21:13 -- common/autobuild_common.sh@437 -- $ date +%s
00:30:26.306 17:21:13 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715786473.XXXXXX
00:30:26.306 17:21:13 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715786473.j2jc2z
00:30:26.306 17:21:13 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:30:26.306 17:21:13 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
00:30:26.306 17:21:13 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:30:26.306 17:21:13 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:26.306 17:21:13 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:26.306 17:21:13 -- common/autobuild_common.sh@453 -- $ get_config_params
00:30:26.306 17:21:13 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:30:26.306 17:21:13 -- common/autotest_common.sh@10 -- $ set +x
00:30:26.306 17:21:13 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:30:26.306 17:21:13 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:30:26.306 17:21:13 -- pm/common@17 -- $ local monitor
00:30:26.306 17:21:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:26.306 17:21:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:26.306 17:21:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:26.306 17:21:13 -- pm/common@21 -- $ date +%s
00:30:26.306 17:21:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:26.306 17:21:13 -- pm/common@21 -- $ date +%s
00:30:26.306 17:21:13 -- pm/common@21 -- $ date +%s
00:30:26.306 17:21:13 -- pm/common@25 -- $ sleep 1
00:30:26.306 17:21:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715786473
00:30:26.306 17:21:13 -- pm/common@21 -- $ date +%s
00:30:26.306 17:21:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715786473
00:30:26.306 17:21:13 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715786473
00:30:26.306 17:21:13 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715786473
00:30:26.564 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715786473_collect-cpu-temp.pm.log
00:30:26.564 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715786473_collect-vmstat.pm.log
00:30:26.564 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715786473_collect-cpu-load.pm.log
00:30:26.564 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715786473_collect-bmc-pm.bmc.pm.log
00:30:27.500 17:21:14 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:30:27.500 17:21:14 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:30:27.500 17:21:14 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:27.500 17:21:14 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:27.500 17:21:14 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:27.500 17:21:14 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:27.500 17:21:14 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:27.500 17:21:14 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:27.501 17:21:14 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:27.501 17:21:14 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:27.501 17:21:14 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:27.501 17:21:14 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:27.501 17:21:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:27.501 17:21:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:27.501 17:21:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:30:27.501 17:21:14 -- pm/common@44 -- $ pid=3274385
00:30:27.501 17:21:14 -- pm/common@50 -- $ kill -TERM 3274385
00:30:27.501 17:21:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:27.501 17:21:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:30:27.501 17:21:14 -- pm/common@44 -- $ pid=3274387
00:30:27.501 17:21:14 -- pm/common@50 -- $ kill -TERM 3274387
00:30:27.501 17:21:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:27.501 17:21:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:30:27.501 17:21:14 -- pm/common@44 -- $ pid=3274389
00:30:27.501 17:21:14 -- pm/common@50 -- $ kill -TERM 3274389
00:30:27.501 17:21:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:27.501 17:21:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:30:27.501 17:21:14 -- pm/common@44 -- $ pid=3274419
00:30:27.501 17:21:14 -- pm/common@50 -- $ sudo -E kill -TERM 3274419
00:30:27.501 + [[ -n 2776406 ]]
00:30:27.501 + sudo kill 2776406
00:30:27.512 [Pipeline] }
00:30:27.533 [Pipeline] // stage
00:30:27.538 [Pipeline] }
00:30:27.557 [Pipeline] // timeout
00:30:27.562 [Pipeline] }
00:30:27.582 [Pipeline] // catchError
00:30:27.588 [Pipeline] }
00:30:27.606 [Pipeline] // wrap
00:30:27.614 [Pipeline] }
00:30:27.635 [Pipeline] // catchError
00:30:27.647 [Pipeline] stage
00:30:27.650 [Pipeline] { (Epilogue)
00:30:27.665 [Pipeline] catchError
00:30:27.667 [Pipeline] {
00:30:27.681 [Pipeline] echo
00:30:27.682 Cleanup processes
00:30:27.687 [Pipeline] sh
00:30:27.965 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:27.965 3274506 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:30:27.965 3274783 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:27.978 [Pipeline] sh
00:30:28.260 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:28.260 ++ grep -v 'sudo pgrep'
00:30:28.260 ++ awk '{print $1}'
00:30:28.260 + sudo kill -9 3274506
00:30:28.272 [Pipeline] sh
00:30:28.551 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:38.526 [Pipeline] sh
00:30:38.809 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:38.809 Artifacts sizes are good
00:30:38.824 [Pipeline] archiveArtifacts
00:30:38.831 Archiving artifacts
00:30:38.972 [Pipeline] sh
00:30:39.279 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:30:39.293 [Pipeline] cleanWs
00:30:39.302 [WS-CLEANUP] Deleting project workspace...
00:30:39.302 [WS-CLEANUP] Deferred wipeout is used...
00:30:39.309 [WS-CLEANUP] done
00:30:39.311 [Pipeline] }
00:30:39.335 [Pipeline] // catchError
00:30:39.347 [Pipeline] sh
00:30:39.630 + logger -p user.info -t JENKINS-CI
00:30:39.641 [Pipeline] }
00:30:39.660 [Pipeline] // stage
00:30:39.666 [Pipeline] }
00:30:39.684 [Pipeline] // node
00:30:39.691 [Pipeline] End of Pipeline
00:30:39.723 Finished: SUCCESS